-
-
-
+
The Resource Browser allows you to perform the following actions:
diff --git a/src/content/docs/aws/services/app-auto-scaling.md b/src/content/docs/aws/services/appautoscaling.md
similarity index 86%
rename from src/content/docs/aws/services/app-auto-scaling.md
rename to src/content/docs/aws/services/appautoscaling.md
index 0a358453..d8d9ae5a 100644
--- a/src/content/docs/aws/services/app-auto-scaling.md
+++ b/src/content/docs/aws/services/appautoscaling.md
@@ -1,6 +1,5 @@
---
title: "Application Auto Scaling"
-linkTitle: "Application Auto Scaling"
description: Get started with Application Auto Scaling on LocalStack
tags: ["Base"]
persistence: supported
@@ -14,7 +13,7 @@ With Application Auto Scaling, you can configure automatic scaling for services
Auto scaling uses CloudWatch under the hood to configure scalable targets, which are uniquely identified by a service namespace, resource ID, and scalable dimension.
LocalStack allows you to use the Application Auto Scaling APIs in your local environment to scale different resources based on scaling policies and scheduled scaling.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_application-autoscaling" >}}), which provides information on the extent of Application Auto Scaling's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of Application Auto Scaling's integration with LocalStack.
## Getting Started
@@ -39,16 +38,15 @@ exports.handler = async (event, context) => {
Run the following command to create a new Lambda function using the [`CreateFunction`](https://docs.aws.amazon.com/cli/latest/reference/lambda/create-function.html) API:
-{{< command >}}
-$ zip function.zip index.js
-
-$ awslocal lambda create-function \
+```bash
+zip function.zip index.js
+awslocal lambda create-function \
--function-name autoscaling-example \
--runtime nodejs18.x \
--zip-file fileb://function.zip \
--handler index.handler \
--role arn:aws:iam::000000000000:role/cool-stacklifter
-{{< /command >}}
+```
### Create a version and alias for your Lambda function
@@ -56,14 +54,14 @@ Next, you can create a version for your Lambda function and publish an alias.
We will use the [`PublishVersion`](https://docs.aws.amazon.com/cli/latest/reference/lambda/publish-version.html) and [`CreateAlias`](https://docs.aws.amazon.com/cli/latest/reference/lambda/create-alias.html) APIs for this.
Run the following commands:
-{{< command >}}
-$ awslocal lambda publish-version --function-name autoscaling-example
-$ awslocal lambda create-alias \
+```bash
+awslocal lambda publish-version --function-name autoscaling-example
+awslocal lambda create-alias \
--function-name autoscaling-example \
--description "alias for blue version of function" \
--function-version 1 \
--name BLUE
-{{< /command >}}
+```
### Register the Lambda function as a scalable target
@@ -72,20 +70,20 @@ We will specify the `--service-namespace` as `lambda`, `--scalable-dimension` as
Run the following command to register the scalable target:
-{{< command >}}
-$ awslocal application-autoscaling register-scalable-target \
+```bash
+awslocal application-autoscaling register-scalable-target \
--service-namespace lambda \
--scalable-dimension lambda:function:ProvisionedConcurrency \
--resource-id function:autoscaling-example:BLUE \
--min-capacity 0 --max-capacity 0
-{{< /command >}}
+```
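+
+To confirm the target was registered, you can optionally list the scalable targets using the [`DescribeScalableTargets`](https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/describe-scalable-targets.html) API:
+
+```bash
+awslocal application-autoscaling describe-scalable-targets \
+    --service-namespace lambda
+```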
### Setting up a scheduled action
You can create a scheduled action that scales out by specifying the `--schedule` parameter with a recurring schedule expressed as a cron expression.
Run the following command to create a scheduled action using the [`PutScheduledAction`](https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/put-scheduled-action.html) API:
-{{< command >}}
+```bash
awslocal application-autoscaling put-scheduled-action \
--service-namespace lambda \
--scalable-dimension lambda:function:ProvisionedConcurrency \
@@ -93,14 +91,14 @@ awslocal application-autoscaling put-scheduled-action \
--scheduled-action-name lambda-action \
--schedule "cron(*/2 * * * ? *)" \
--scalable-target-action MinCapacity=1,MaxCapacity=5
-{{< /command >}}
+```
You can confirm that the scheduled action exists using the [`DescribeScheduledActions`](https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/describe-scheduled-actions.html) API:
-{{< command >}}
-$ awslocal application-autoscaling describe-scheduled-actions \
+```bash
+awslocal application-autoscaling describe-scheduled-actions \
--service-namespace lambda
-{{< /command >}}
+```
### Setting up a target tracking scaling policy
@@ -110,22 +108,21 @@ When metrics lack data due to minimal application load, Application Auto Scaling
Run the following command to create a target-tracking scaling policy:
-{{< command >}}
-$ awslocal application-autoscaling put-scaling-policy \
+```bash
+awslocal application-autoscaling put-scaling-policy \
--service-namespace lambda \
--scalable-dimension lambda:function:ProvisionedConcurrency \
--resource-id function:events-example:BLUE \
--policy-name scaling-policy --policy-type TargetTrackingScaling \
--target-tracking-scaling-policy-configuration '{ "TargetValue": 50.0, "PredefinedMetricSpecification": { "PredefinedMetricType": "predefinedmetric" }}'
-{{< /command >}}
+```
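+
+As an optional check, you can verify that the policy was created using the [`DescribeScalingPolicies`](https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/describe-scaling-policies.html) API:
+
+```bash
+awslocal application-autoscaling describe-scaling-policies \
+    --service-namespace lambda
+```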
## Resource Browser
The LocalStack Web Application provides a Resource Browser for managing AppConfig applications.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resource Browser** section, and then clicking on **Application Auto Scaling** under the **App Integration** section.
-
-
-
+
The Resource Browser allows you to perform the following actions:
diff --git a/src/content/docs/aws/services/athena.md b/src/content/docs/aws/services/athena.mdx
similarity index 84%
rename from src/content/docs/aws/services/athena.md
rename to src/content/docs/aws/services/athena.mdx
index 0480a8cb..011f694e 100644
--- a/src/content/docs/aws/services/athena.md
+++ b/src/content/docs/aws/services/athena.mdx
@@ -1,10 +1,11 @@
---
title: "Athena"
-linkTitle: "Athena"
description: Get started with Athena on LocalStack
tags: ["Ultimate"]
---
+import { Tabs, TabItem } from '@astrojs/starlight/components';
+
## Introduction
Athena is an interactive query service provided by Amazon Web Services (AWS) that enables you to analyze data stored in S3 using standard SQL queries.
@@ -12,7 +13,7 @@ Athena allows users to create ad-hoc queries to perform data analysis, filter, a
It supports various file formats, such as JSON, Parquet, and CSV, making it compatible with a wide range of data sources.
LocalStack allows you to configure the Athena APIs with a Hive metastore that can connect to the S3 API and query your data directly in your local environment.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_athena" >}}), which provides information on the extent of Athena's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of Athena's integration with LocalStack.
## Getting started
@@ -21,44 +22,44 @@ This guide is designed for users new to Athena and assumes basic knowledge of th
Start your LocalStack container using your preferred method.
We will demonstrate how to create an Athena table and run a query against it in addition to reading the results with the AWS CLI.
-{{< callout >}}
+:::note
To provide the Athena API, LocalStack downloads additional dependencies.
This involves getting a Docker image of around 1.5GB, containing Presto, Hive, and other tools.
These components are retrieved automatically when you initiate the service.
To ensure a smooth initial setup, ensure you're connected to a stable internet connection while fetching these components for the first time.
-{{< /callout >}}
+:::
### Create an S3 bucket
You can create an S3 bucket using the [`mb`](https://docs.aws.amazon.com/cli/latest/reference/s3/mb.html) command.
Run the following command to create a bucket named `athena-bucket`:
-{{< command >}}
-$ awslocal s3 mb s3://athena-bucket
-{{< / command >}}
+```bash
+awslocal s3 mb s3://athena-bucket
+```
You can create some sample data using the following commands:
-{{< command >}}
-$ echo "Name,Service" > data.csv
-$ echo "LocalStack,Athena" >> data.csv
-{{< / command >}}
+```bash
+echo "Name,Service" > data.csv
+echo "LocalStack,Athena" >> data.csv
+```
You can upload the data to your bucket using the [`cp`](https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html) command:
-{{< command >}}
-$ awslocal s3 cp data.csv s3://athena-bucket/data/
-{{< / command >}}
+```bash
+awslocal s3 cp data.csv s3://athena-bucket/data/
+```
### Create an Athena table
You can create an Athena table using the [`CreateTable`](https://docs.aws.amazon.com/athena/latest/APIReference/API_CreateTable.html) API.
Run the following command to create a table named `tbl01`:
-{{< command >}}
-$ awslocal athena start-query-execution \
+```bash
+awslocal athena start-query-execution \
--query-string "create external table tbl01 (name STRING, surname STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LOCATION 's3://athena-bucket/data/';" --result-configuration "OutputLocation=s3://athena-bucket/output/"
-{{< / command >}}
+```
The following output would be retrieved:
@@ -71,9 +72,9 @@ The following output would be retrieved:
You can retrieve information about the query execution using the [`GetQueryExecution`](https://docs.aws.amazon.com/athena/latest/APIReference/API_GetQueryExecution.html) API.
Run the following command:
-{{< command >}}
-$ awslocal athena get-query-execution --query-execution-id 593acab7
-{{< / command >}}
+```bash
+awslocal athena get-query-execution --query-execution-id 593acab7
+```
Replace `593acab7` with the `QueryExecutionId` returned by the [`StartQueryExecution`](https://docs.aws.amazon.com/athena/latest/APIReference/API_StartQueryExecution.html) API.
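+
+If you prefer not to copy the ID manually, you can capture it directly when starting a query; a small sketch, assuming `jq` is installed:
+
+```bash
+query_id=$(awslocal athena start-query-execution \
+    --query-string "select * from tbl01;" \
+    --result-configuration "OutputLocation=s3://athena-bucket/output/" | jq -r .QueryExecutionId)
+awslocal athena get-query-execution --query-execution-id $query_id
+```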
@@ -82,27 +83,27 @@ Replace `593acab7` with the `QueryExecutionId` returned by the [`StartQueryExecu
You can get the output of the query using the [`GetQueryResults`](https://docs.aws.amazon.com/athena/latest/APIReference/API_GetQueryResults.html) API.
Run the following command:
-{{< command >}}
-$ awslocal athena get-query-results --query-execution-id 593acab7
-{{< / command >}}
+```bash
+awslocal athena get-query-results --query-execution-id 593acab7
+```
You can now read the data from the `tbl01` table, which retrieves the underlying data from the S3 location specified in your table creation statement.
Run the following command:
-{{< command >}}
-$ awslocal athena start-query-execution \
+```bash
+awslocal athena start-query-execution \
--query-string "select * from tbl01;" --result-configuration "OutputLocation=s3://athena-bucket/output/"
-{{< / command >}}
+```
You can similarly retrieve the execution details with the [`GetQueryExecution`](https://docs.aws.amazon.com/athena/latest/APIReference/API_GetQueryExecution.html) API, using the `QueryExecutionId` returned by the previous step.
You can copy the `ResultConfiguration` from the output and use it to retrieve the results of the query.
Run the following command:
-{{< command >}}
-$ awslocal cp s3://athena-bucket/output/593acab7.csv .
-$ cat 593acab7.csv
-{{< / command >}}
+```bash
+awslocal s3 cp s3://athena-bucket/output/593acab7.csv .
+cat 593acab7.csv
+```
Replace `593acab7.csv` with the path to the file that was present in the `ResultConfiguration` of the previous step.
You can also use the [`GetQueryResults`](https://docs.aws.amazon.com/athena/latest/APIReference/API_GetQueryResults.html) API to retrieve the results of the query.
@@ -117,34 +118,37 @@ The Delta Lake files used in this sample are available in a public S3 bucket und
For your convenience, we have prepared the test files in a downloadable ZIP file [here](https://localstack-assets.s3.amazonaws.com/aws-sample-athena-delta-lake.zip).
We start by downloading and extracting this ZIP file:
-{{< command >}}
-$ mkdir /tmp/delta-lake-sample; cd /tmp/delta-lake-sample
-$ wget https://localstack-assets.s3.amazonaws.com/aws-sample-athena-delta-lake.zip
-$ unzip aws-sample-athena-delta-lake.zip; rm aws-sample-athena-delta-lake.zip
-{{< / command >}}
+```bash
+mkdir /tmp/delta-lake-sample; cd /tmp/delta-lake-sample
+wget https://localstack-assets.s3.amazonaws.com/aws-sample-athena-delta-lake.zip
+unzip aws-sample-athena-delta-lake.zip; rm aws-sample-athena-delta-lake.zip
+```
We can then create an S3 bucket in LocalStack using the [`awslocal`](https://github.com/localstack/awscli-local) command line, and upload the files to the bucket:
-{{< command >}}
-$ awslocal s3 mb s3://test
-$ awslocal s3 sync /tmp/delta-lake-sample s3://test
-{{< / command >}}
+
+```bash
+awslocal s3 mb s3://test
+awslocal s3 sync /tmp/delta-lake-sample s3://test
+```
Next, we create the table definitions in Athena:
-{{< command >}}
-$ awslocal athena start-query-execution \
+
+```bash
+awslocal athena start-query-execution \
--query-string "CREATE EXTERNAL TABLE test (product_id string, product_name string, \
price bigint, currency string, category string, updated_at double) \
LOCATION 's3://test/' TBLPROPERTIES ('table_type'='DELTA')"
-{{< / command >}}
+```
Please note that this query may take some time to finish executing.
You can observe the output in the LocalStack container (ideally with `DEBUG=1` enabled) to follow the steps of the query execution.
Finally, we can now run a `SELECT` query to extract data from the Delta Lake table we've just created:
-{{< command >}}
-$ queryId=$(awslocal athena start-query-execution --query-string "SELECT * from deltalake.default.test" | jq -r .QueryExecutionId)
-$ awslocal athena get-query-results --query-execution-id $queryId
-{{< / command >}}
+
+```bash
+queryId=$(awslocal athena start-query-execution --query-string "SELECT * from deltalake.default.test" | jq -r .QueryExecutionId)
+awslocal athena get-query-results --query-execution-id $queryId
+```
The query should yield a result similar to the output below:
@@ -175,9 +179,9 @@ The query should yield a result similar to the output below:
...
```
-{{< callout >}}
+:::note
The `SELECT` statement above currently requires us to prefix the database/table name with `deltalake.` - this will be further improved in a future iteration, for better parity with AWS.
-{{< /callout >}}
+:::
## Iceberg Tables
@@ -210,8 +214,10 @@ s3://mybucket/prefix/temp/
You can configure the Athena service in LocalStack with various clients, such as [PyAthena](https://github.com/laughingman7743/PyAthena/) and [awswrangler](https://github.com/aws/aws-sdk-pandas), among others!
Here are small snippets to get you started:
-{{< tabpane lang="python" >}}
-{{< tab header="PyAthena" lang="python" >}}
+
+
The Resource Browser allows you to perform the following actions:
diff --git a/src/content/docs/aws/services/auto-scaling.md b/src/content/docs/aws/services/autoscaling.md
similarity index 84%
rename from src/content/docs/aws/services/auto-scaling.md
rename to src/content/docs/aws/services/autoscaling.md
index ba7d4514..d525b78e 100644
--- a/src/content/docs/aws/services/auto-scaling.md
+++ b/src/content/docs/aws/services/autoscaling.md
@@ -1,7 +1,6 @@
---
title: "Auto Scaling"
-linkTitle: "Auto Scaling"
-description: Get started with Auto Scaling" on LocalStack
+description: Get started with Auto Scaling on LocalStack
tags: ["Base"]
---
@@ -11,7 +10,7 @@ Auto Scaling helps you maintain application availability and allows you to autom
You can use Auto Scaling to ensure that you are running your desired number of instances.
LocalStack allows you to use the Auto Scaling APIs locally to create and manage Auto Scaling groups locally.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_autoscaling" >}}), which provides information on the extent of Auto Scaling's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of Auto Scaling's integration with LocalStack.
## Getting started
@@ -25,12 +24,12 @@ We will demonstrate how you can create a launch template, an Auto Scaling group,
You can create a launch template that defines the launch configuration for the instances in the Auto Scaling group using the [`CreateLaunchTemplate`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CreateLaunchTemplate.html) API.
Run the following command to create a launch template:
-{{< command >}}
-$ awslocal ec2 create-launch-template \
+```bash
+awslocal ec2 create-launch-template \
--launch-template-name my-template-for-auto-scaling \
--version-description version1 \
--launch-template-data '{"ImageId":"ami-ff0fea8310f3","InstanceType":"t2.micro"}'
-{{< /command >}}
+```
The following output is displayed:
@@ -53,30 +52,30 @@ The following output is displayed:
Before creating an Auto Scaling group, you need to fetch the subnet ID.
Run the following command to describe the subnets:
-{{< command >}}
-$ awslocal ec2 describe-subnets --output text --query Subnets[0].SubnetId
-{{< /command >}}
+```bash
+awslocal ec2 describe-subnets --output text --query Subnets[0].SubnetId
+```
Copy the subnet ID from the output and use it to create the Auto Scaling group.
Run the following command to create an Auto Scaling group using the [`CreateAutoScalingGroup`](https://docs.aws.amazon.com/autoscaling/ec2/APIReference/API_CreateAutoScalingGroup.html) API:
-{{< command >}}
-$ awslocal autoscaling create-auto-scaling-group \
+```bash
+awslocal autoscaling create-auto-scaling-group \
--auto-scaling-group-name my-asg \
--launch-template LaunchTemplateId=lt-5ccdf1a84f178ba44 \
--min-size 1 \
--max-size 5 \
--vpc-zone-identifier 'subnet-d4d16268'
-{{< /command >}}
+```
### Describe the Auto Scaling group
You can describe the Auto Scaling group using the [`DescribeAutoScalingGroups`](https://docs.aws.amazon.com/autoscaling/ec2/APIReference/API_DescribeAutoScalingGroups.html) API.
Run the following command to describe the Auto Scaling group:
-{{< command >}}
-$ awslocal autoscaling describe-auto-scaling-groups
-{{< /command >}}
+```bash
+awslocal autoscaling describe-auto-scaling-groups
+```
The following output is displayed:
@@ -119,23 +118,23 @@ You can attach an instance to the Auto Scaling group using the [`AttachInstances
Before that, create an EC2 instance using the [`RunInstances`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_RunInstances.html) API.
Run the following command to create an EC2 instance locally:
-{{< command >}}
-$ awslocal ec2 run-instances \
+```bash
+awslocal ec2 run-instances \
--image-id ami-ff0fea8310f3 --count 1
-{{< /command >}}
+```
Fetch the instance ID from the output and use it to attach the instance to the Auto Scaling group.
Run the following command to attach the instance to the Auto Scaling group:
-{{< command >}}
-$ awslocal autoscaling attach-instances \
+```bash
+awslocal autoscaling attach-instances \
--instance-ids i-0d678c4ecf6018dde \
--auto-scaling-group-name my-asg
-{{< /command >}}
+```
Replace `i-0d678c4ecf6018dde` with the instance ID that you fetched from the output.
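+
+To confirm that the instance is now attached, you can describe the group again:
+
+```bash
+awslocal autoscaling describe-auto-scaling-groups \
+    --auto-scaling-group-names my-asg
+```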
## Current Limitations
-LocalStack does not support the `docker`/`libvirt` [VM manager for EC2]({{< ref "/user-guide/aws/ec2/#vm-managers" >}}).
+LocalStack does not support the `docker`/`libvirt` [VM manager for EC2](/aws/services/ec2/#vm-managers).
It only works with the `mock` VM manager.
diff --git a/src/content/docs/aws/services/backup.md b/src/content/docs/aws/services/backup.md
index e90413b0..d8e9d572 100644
--- a/src/content/docs/aws/services/backup.md
+++ b/src/content/docs/aws/services/backup.md
@@ -1,6 +1,5 @@
---
title: "Backup"
-linkTitle: "Backup"
description: Get started with Backup on LocalStack
tags: ["Ultimate"]
persistence: supported
@@ -14,7 +13,7 @@ Backup supports a wide range of AWS resources, including Elastic Block Store (EB
Backup enables you to set backup retention policies, allowing you to specify how long you want to retain your backup copies.
LocalStack allows you to use the Backup APIs in your local environment to manage backup plans, create scheduled or on-demand backups of certain resource types.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_backup" >}}), which provides information on the extent of Backup's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of Backup's integration with LocalStack.
## Getting started
@@ -28,10 +27,10 @@ We will demonstrate how to create a backup job and specify a set of resources to
You can create a backup vault which acts as a logical container where backups are stored using the [`CreateBackupVault`](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_CreateBackupVault.html) API.
Run the following command to create a backup vault named `primary`:
-{{< command >}}
-$ awslocal backup create-backup-vault \
+```bash
+awslocal backup create-backup-vault \
--backup-vault-name primary
-{{< / command >}}
+```
The following output would be retrieved:
@@ -73,10 +72,10 @@ You can specify the backup plan in a `backup-plan.json` file:
You can use the [`CreateBackupPlan`](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_CreateBackupPlan.html) API to create a backup plan.
Run the following command to create a backup plan:
-{{< command >}}
-$ awslocal backup create-backup-plan \
+```bash
+awslocal backup create-backup-plan \
--backup-plan file://backup-plan.json
-{{< / command >}}
+```
The following output would be retrieved:
@@ -111,11 +110,11 @@ You can specify the backup selection in a `backup-selection.json` file:
You can use the [`CreateBackupSelection`](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_CreateBackupSelection.html) API to create a backup selection.
Run the following command to create a backup selection:
-{{< command >}}
-$ awslocal backup create-backup-selection \
+```bash
+awslocal backup create-backup-selection \
--backup-plan-id 9337aba3 \
--backup-selection file://backup-plan-resources.json
-{{< / command >}}
+```
Replace the `--backup-plan-id` value with the `BackupPlanId` value from the output of the previous command.
The following output would be retrieved:
@@ -133,7 +132,7 @@ The following output would be retrieved:
The LocalStack Web Application provides a Resource Browser for managing backup plans and vaults.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Backup** under the **Storage** section.
-
+
The Resource Browser allows you to perform the following actions:
diff --git a/src/content/docs/aws/services/batch.md b/src/content/docs/aws/services/batch.md
index 72a123f7..72523607 100644
--- a/src/content/docs/aws/services/batch.md
+++ b/src/content/docs/aws/services/batch.md
@@ -1,6 +1,5 @@
---
title: Batch
-linkTitle: Batch
description: Get started with Batch on LocalStack
tags: ["Ultimate"]
---
@@ -11,7 +10,7 @@ Batch is a cloud-based service provided by Amazon Web Services (AWS) that simpli
Batch allows you to efficiently process large volumes of data and run batch jobs without the need to manage and provision underlying compute resources.
LocalStack allows you to use the Batch APIs to automate and scale computational tasks in your local environment while handling batch workloads.
-The supported APIs are available on our [API Coverage Page]({{< ref "coverage_batch" >}}), which provides information on the extent of Batch integration with LocalStack.
+The supported APIs are available on our [API Coverage Page](), which provides information on the extent of Batch integration with LocalStack.
## Getting started
@@ -30,15 +29,15 @@ We will demonstrate how you create and run a Batch job by following these steps:
You can create a role using the [`CreateRole`](https://docs.aws.amazon.com/cli/latest/reference/iam/create-role.html) API.
For LocalStack, the service role simply needs to exist.
-However, when [enforcing IAM policies]({{< ref "user-guide/aws/iam#enforcing-iam-policies" >}}), it is necessary that the policy is valid.
+However, when [enforcing IAM policies](/aws/services/iam/#enforcing-iam-policies), the policy must be valid.
Run the following command to create a role with an empty policy document:
-{{< command >}}
-$ awslocal iam create-role \
+```bash
+awslocal iam create-role \
--role-name myrole \
--assume-role-policy-document "{}"
-{{< / command >}}
+```
You should see the following output:
@@ -60,12 +59,12 @@ You should see the following output:
You can use the [`CreateComputeEnvironment`](https://docs.aws.amazon.com/cli/latest/reference/batch/create-compute-environment.html) API to create a compute environment.
Run the following command, using the role ARN above (`arn:aws:iam::000000000000:role/myrole`), to create the compute environment:
-{{< command >}}
-$ awslocal batch create-compute-environment \
+```bash
+awslocal batch create-compute-environment \
--compute-environment-name myenv \
--type UNMANAGED \
--service-role
-
-
+
The Resource Browser allows you to perform the following actions:
@@ -115,16 +119,16 @@ The Resource Browser allows you to perform the following actions:
The following code snippets and sample applications provide practical examples of how to use CloudFormation in LocalStack for various use cases:
- [Serverless Container-based APIs with Amazon ECS & API Gateway](https://github.com/localstack/serverless-api-ecs-apigateway-sample)
-- [Deploying containers on ECS clusters using ECR and Fargate]({{< ref "/tutorials/ecs-ecr-container-app" >}})
+- [Deploying containers on ECS clusters using ECR and Fargate]()
- [Messaging Processing application with SQS, DynamoDB, and Fargate](https://github.com/localstack/sqs-fargate-ddb-cdk-go)
## Feature coverage
-{{< callout "tip" >}}
+:::tip
We are continually enhancing our CloudFormation feature coverage by consistently introducing new resource types.
Your feature requests assist us in determining the priority of resource additions.
Feel free to contribute by [creating a new GitHub issue](https://github.com/localstack/localstack/issues/new?assignees=&labels=feature-request&template=feature-request.yml&title=feature+request%3A+%3Ctitle%3E).
-{{< /callout >}}
+:::
### Features
@@ -145,17 +149,17 @@ Feel free to contribute by [creating a new GitHub issue](https://github.com/loca
| StackSets | Partial |
| Intrinsic Functions | Partial |
-{{< callout >}}
+:::note
Currently, support for `UPDATE` operations on resources is limited.
Prefer stack re-creation over stack update at this time.
-{{< /callout >}}
+:::
-{{< callout >}}
+:::note
Currently, support for `NoEcho` parameters is limited.
Parameters will be masked only in the `Parameters` section of responses to `DescribeStacks` and `DescribeChangeSets` requests.
This might expose sensitive information.
Please exercise caution when using parameters with `NoEcho`.
-{{< /callout >}}
+:::
### Intrinsic Functions
@@ -179,9 +183,9 @@ Please exercise caution when using parameters with `NoEcho`.
### Resources
-{{< callout >}}
+:::note
When utilizing the Community image, any resources within the stack that are not supported will be disregarded and won't be deployed.
-{{< /callout >}}
+:::
#### Community image
diff --git a/src/content/docs/aws/services/cloudfront.md b/src/content/docs/aws/services/cloudfront.md
index 74adac0f..2ac02e1e 100644
--- a/src/content/docs/aws/services/cloudfront.md
+++ b/src/content/docs/aws/services/cloudfront.md
@@ -1,6 +1,5 @@
---
title: "CloudFront"
-linkTitle: "CloudFront"
description: Get started with CloudFront on LocalStack
tags: ["Base"]
persistence: supported
@@ -13,7 +12,7 @@ CloudFront distributes its web content, videos, applications, and APIs with low
CloudFront APIs allow you to configure distributions, customize cache behavior, secure content with access controls, and monitor the CDN's performance through real-time metrics.
LocalStack allows you to use the CloudFront APIs in your local environment to create local CloudFront distributions to transparently access your applications and file artifacts.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_cloudfront" >}}), which provides information on the extent of CloudFront's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of CloudFront's integration with LocalStack.
## Getting started
@@ -25,46 +24,48 @@ We will demonstrate how you can create an S3 bucket, put a text file named `hell
To get started, create an S3 bucket using the `mb` command:
-{{< command >}}
-$ awslocal s3 mb s3://abc123
-{{< / command >}}
+```bash
+awslocal s3 mb s3://abc123
+```
You can now go ahead and create a new text file named `hello.txt`, then upload it to the bucket:
-{{< command >}}
-$ echo 'Hello World' > /tmp/hello.txt
-$ awslocal s3 cp /tmp/hello.txt s3://abc123/hello.txt --acl public-read
-{{< / command >}}
+```bash
+echo 'Hello World' > /tmp/hello.txt
+awslocal s3 cp /tmp/hello.txt s3://abc123/hello.txt --acl public-read
+```
After uploading the file to S3, you can create a CloudFront distribution using the [`CreateDistribution`](https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_CreateDistribution.html) API call.
Run the following command to create a distribution with the default settings:
-{{< command >}}
-$ domain=$(awslocal cloudfront create-distribution \
+```bash
+domain=$(awslocal cloudfront create-distribution \
--origin-domain-name abc123.s3.amazonaws.com | jq -r '.Distribution.DomainName')
-$ curl -k https://$domain/hello.txt
-{{< / command >}}
+curl -k https://$domain/hello.txt
+```
-{{< callout "tip" >}}
+:::tip
If you wish to use CloudFront on the system host, ensure your local DNS setup is correctly configured.
-Refer to the section on [System DNS configuration]({{< ref "dns-server#system-dns-configuration" >}}) for details.
-{{< /callout >}}
+Refer to the section on [System DNS configuration](/aws/tooling/dns-server#system-dns-configuration) for details.
+:::
In the example provided above, be aware that the final command (`curl https://$domain/hello.txt`) might encounter a temporary failure accompanied by a warning message `Could not resolve host`.
+
This can occur because different operating systems adopt diverse DNS caching strategies, causing a delay in the availability of the CloudFront distribution's DNS name (e.g., `abc123.cloudfront.net`) within the system.
Typically, after a few retries, the command should succeed.
+
It's worth noting that similar behavior can be observed in the actual AWS environment, where CloudFront DNS names may take up to 10-15 minutes to propagate across the network.
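+
+If you script this step, a small retry loop can absorb the DNS propagation delay; the timing below is an assumption you may want to adjust:
+
+```bash
+until curl -ks https://$domain/hello.txt; do
+    sleep 2
+done
+```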
## Lambda@Edge
-{{< callout "note">}}
+:::note
We’re introducing an early, incomplete, and experimental feature that emulates AWS CloudFront Lambda@Edge, starting with version 4.3.0.
It enables running Lambda functions at simulated edge locations.
This allows you to locally test and develop request/response modifications, security enhancements and more.
This feature is still under development, and functionality is limited.
-{{< /callout >}}
+:::
You can enable this feature by setting `CLOUDFRONT_LAMBDA_EDGE=1` in your LocalStack configuration.
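+
+For example, when starting LocalStack via the CLI, the flag can be passed as an environment variable:
+
+```bash
+CLOUDFRONT_LAMBDA_EDGE=1 localstack start
+```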
@@ -77,7 +78,7 @@ You can enable this feature by setting `CLOUDFRONT_LAMBDA_EDGE=1` in your LocalS
### Current limitations
-- The [`UpdateDistribution`](https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_UpdateDistribution.html), [`DeleteDistribution`](https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_DeleteDistribution.html), and [`Persistence Restore`]({{< ref "persistence" >}}) features are not yet supported for Lambda@Edge.
+- The [`UpdateDistribution`](https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_UpdateDistribution.html), [`DeleteDistribution`](https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_DeleteDistribution.html), and [Persistence Restore](/aws/capabilities/state-management/persistence) features are not yet supported for Lambda@Edge.
- The `origin-request` and `origin-response` event types currently trigger for each request because caching is not implemented in CloudFront.
## Using custom URLs
@@ -92,18 +93,16 @@ The format of this structure is similar to the one used in [AWS CloudFront optio
In the given example, two domains are specified as `Aliases` for a distribution.
Please note that a complete configuration would entail additional values relevant to the distribution, which have been omitted here for brevity.
-{{< command >}}
+```bash
--distribution-config '{..."Aliases": {"Quantity": 2, "Items": ["custom.domain.one", "customDomain.two"]}...}'
-{{< / command >}}
+```
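+
+For reference, a fuller sketch of such a configuration is shown below; the bucket name, origin ID, and cache behavior values are illustrative assumptions, and your distribution may require additional fields:
+
+```bash
+cat > distribution-config.json <<'EOF'
+{
+  "CallerReference": "custom-domains-example",
+  "Comment": "Distribution with custom domain aliases",
+  "Enabled": true,
+  "Aliases": {"Quantity": 2, "Items": ["custom.domain.one", "customDomain.two"]},
+  "Origins": {
+    "Quantity": 1,
+    "Items": [{
+      "Id": "s3-origin",
+      "DomainName": "abc123.s3.amazonaws.com",
+      "S3OriginConfig": {"OriginAccessIdentity": ""}
+    }]
+  },
+  "DefaultCacheBehavior": {
+    "TargetOriginId": "s3-origin",
+    "ViewerProtocolPolicy": "allow-all",
+    "MinTTL": 0,
+    "ForwardedValues": {"QueryString": false, "Cookies": {"Forward": "none"}}
+  }
+}
+EOF
+awslocal cloudfront create-distribution --distribution-config file://distribution-config.json
+```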
## Resource Browser
The LocalStack Web Application provides a Resource Browser for CloudFront, which allows you to view and manage your CloudFront distributions.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resource Browser** section, and then clicking on **CloudFront** under the **Analytics** section.
-
-
-
+
-
+
- **Create Log Group**: Create a new log group by specifying the `Log Group Name`, `KMS Key ID`, and `Tags`.
- **Put metric**: Create a new metric by specifying the `Namespace` and `Metric Data`.
diff --git a/src/content/docs/aws/services/cloudwatchlogs.md b/src/content/docs/aws/services/cloudwatchlogs.md
index bc3c23c2..687058ff 100644
--- a/src/content/docs/aws/services/cloudwatchlogs.md
+++ b/src/content/docs/aws/services/cloudwatchlogs.md
@@ -1,11 +1,12 @@
---
title: "CloudWatch Logs"
-linkTitle: "CloudWatch Logs"
description: Get started with AWS CloudWatch Logs on LocalStack
tags: ["Free"]
persistence: supported
---
+## Introduction
+
[CloudWatch Logs](https://docs.aws.amazon.com/cloudwatch/index.html) allows you to store and retrieve logs.
While some services automatically create and write logs (e.g. Lambda), logs can also be added manually.
CloudWatch Logs is available in the Community version.
@@ -23,37 +24,40 @@ In the following we setup a little example on how to use subscription filters wi
First, we set up the required resources: a Kinesis stream, a log group, and a log stream.
Then we can configure the subscription filter.
-{{< command >}}
-$ awslocal kinesis create-stream --stream-name "logtest" --shard-count 1
-$ kinesis_arn=$(awslocal kinesis describe-stream --stream-name "logtest" | jq -r .StreamDescription.StreamARN)
-$ awslocal logs create-log-group --log-group-name test
+```bash
+awslocal kinesis create-stream --stream-name "logtest" --shard-count 1
+kinesis_arn=$(awslocal kinesis describe-stream --stream-name "logtest" | jq -r .StreamDescription.StreamARN)
+
+awslocal logs create-log-group --log-group-name test
-$ awslocal logs create-log-stream \
- --log-group-name test \
- --log-stream-name test
+awslocal logs create-log-stream \
+ --log-group-name test \
+ --log-stream-name test
-$ awslocal logs put-subscription-filter \
+awslocal logs put-subscription-filter \
--log-group-name "test" \
--filter-name "kinesis_test" \
--filter-pattern "" \
--destination-arn $kinesis_arn \
--role-arn "arn:aws:iam::000000000000:role/kinesis_role"
-{{< / command >}}
+```
+
+Next, we can add a log event that will be forwarded to Kinesis.
-Next, we can add a log event, that will be forwarded to kinesis.
-{{< command >}}
-$ timestamp=$(($(date +'%s * 1000 + %-N / 1000000')))
-$ awslocal logs put-log-events --log-group-name test --log-stream-name test --log-events "[{\"timestamp\": ${timestamp} , \"message\": \"hello from cloudwatch\"}]"
-{{< / command >}}
+```bash
+timestamp=$(($(date +'%s * 1000 + %-N / 1000000')))
+awslocal logs put-log-events --log-group-name test --log-stream-name test --log-events "[{\"timestamp\": ${timestamp} , \"message\": \"hello from cloudwatch\"}]"
+```
Now we can retrieve the data.
In our example, there will only be one record.
The data record is base64 encoded and compressed in gzip format:
-{{< command >}}
-$ shard_iterator=$(awslocal kinesis get-shard-iterator --stream-name logtest --shard-id shardId-000000000000 --shard-iterator-type TRIM_HORIZON | jq -r .ShardIterator)
-$ record=$(awslocal kinesis get-records --limit 10 --shard-iterator $shard_iterator | jq -r '.Records[0].Data')
-$ echo $record | base64 -d | zcat
-{{< / command >}}
+
+```bash
+shard_iterator=$(awslocal kinesis get-shard-iterator --stream-name logtest --shard-id shardId-000000000000 --shard-iterator-type TRIM_HORIZON | jq -r .ShardIterator)
+record=$(awslocal kinesis get-records --limit 10 --shard-iterator $shard_iterator | jq -r '.Records[0].Data')
+echo $record | base64 -d | zcat
+```
## Filter Pattern (Pro only)
@@ -66,39 +70,42 @@ LocalStack currently supports simple json-property filter.
Metric filters can be used to automatically create CloudWatch metrics.
In the following example we are interested in logs that include a key-value pair `"foo": "bar"` and create a metric filter.
-{{< command >}}
-$ awslocal logs create-log-group --log-group-name test-filter
-$ awslocal logs create-log-stream \
- --log-group-name test-filter \
- --log-stream-name test-filter-stream
+```bash
+awslocal logs create-log-group --log-group-name test-filter
+
+awslocal logs create-log-stream \
+  --log-group-name test-filter \
+  --log-stream-name test-filter-stream
-$ awslocal logs put-metric-filter \
+awslocal logs put-metric-filter \
--log-group-name test-filter \
--filter-name my-filter \
--filter-pattern "{$.foo = \"bar\"}" \
--metric-transformations \
metricName=MyMetric,metricNamespace=MyNamespace,metricValue=1,defaultValue=0
-{{< / command >}}
+```
Next, we can insert some values:
-{{< command >}}
-$ timestamp=$(($(date +'%s * 1000 + %-N / 1000000')))
-$ awslocal logs put-log-events --log-group-name test-filter \
+
+```bash
+timestamp=$(($(date +'%s * 1000 + %-N / 1000000')))
+awslocal logs put-log-events --log-group-name test-filter \
--log-stream-name test-filter-stream \
--log-events \
timestamp=$timestamp,message='"{\"foo\":\"bar\", \"hello\": \"world\"}"' \
timestamp=$timestamp,message="my test event" \
timestamp=$timestamp,message='"{\"foo\":\"nomatch\"}"'
-{{< / command >}}
+```
Now we can check that the metric was indeed created:
-{{< command >}}
+
+```bash
end=$(date +%s)
awslocal cloudwatch get-metric-statistics --namespace MyNamespace \
--metric-name MyMetric --statistics Sum --period 3600 \
--start-time 1659621274 --end-time $end
-{{< / command >}}
+```
### Filter Log Events
@@ -108,9 +115,10 @@ Similarly, you can use filter-pattern to filter logs with different kinds of pat
For purely JSON structured log messages, you can use JSON filter patterns to traverse the JSON object.
Enclose your pattern in curly braces, like this:
-{{< command >}}
-$ awslocal logs filter-log-events --log-group-name test-filter --filter-pattern "{$.foo = \"bar\"}"
-{{< / command >}}
+
+```bash
+awslocal logs filter-log-events --log-group-name test-filter --filter-pattern "{$.foo = \"bar\"}"
+```
This returns all events whose top level "foo" key has the "bar" value.
@@ -118,27 +126,28 @@ This returns all events whose top level "foo" key has the "bar" value.
You can use a simplified regex syntax for regular expression matching.
Enclose your pattern in percentage signs like this:
-{{< command >}}
-$ awslocal logs filter-log-events --log-group-name test-filter --filter-pattern "\%[fF]oo\%"
-{{< / command >}}
+
+```bash
+awslocal logs filter-log-events --log-group-name test-filter --filter-pattern "\%[fF]oo\%"
+```
+
This returns all events containing "Foo" or "foo".
For a complete description of the supported syntax, check [the official AWS documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html#regex-expressions).
#### Unstructured Filter Pattern
If not specified otherwise in the pattern, we look for a match in the whole event message:
-{{< command >}}
-$ awslocal logs filter-log-events --log-group-name test-filter --filter-pattern "foo"
-{{< / command >}}
+
+```bash
+awslocal logs filter-log-events --log-group-name test-filter --filter-pattern "foo"
+```
## Resource Browser
The LocalStack Web Application provides a Resource Browser for exploring CloudWatch Logs.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **CloudWatch Logs** under the **Management/Governance** section.
-
-
+
The Resource Browser allows you to perform the following actions:
diff --git a/src/content/docs/aws/services/codedeploy.md b/src/content/docs/aws/services/codedeploy.md
index aa26ce03..4c4474aa 100644
--- a/src/content/docs/aws/services/codedeploy.md
+++ b/src/content/docs/aws/services/codedeploy.md
@@ -1,8 +1,6 @@
---
title: CodeDeploy
-linkTitle: CodeDeploy
-description: >
- Get started with CodeDeploy on LocalStack
+description: Get started with CodeDeploy on LocalStack
tags: ["Ultimate"]
---
@@ -13,7 +11,7 @@ On AWS, it supports deployments to Amazon EC2 instances, on-premises instances,
Furthermore, based on the target it is also possible to use an in-place deployment or a blue/green deployment.
LocalStack supports mocking of CodeDeploy API operations.
-The supported operations are listed on the [API coverage page]({{< ref "coverage_codedeploy" >}}).
+The supported operations are listed on the [API coverage page]().
## Getting Started
@@ -28,20 +26,27 @@ Start LocalStack using your preferred method.
An application is a CodeDeploy construct that uniquely identifies your targeted application.
Create an application with the [CreateApplication](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_CreateApplication.html) operation:
-{{< command >}}
-$ awslocal deploy create-application --application-name hello --compute-platform Server
-{{< /command >}}
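+```bash
+awslocal deploy create-application --application-name hello --compute-platform Server
+```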
For instance, the redirect URL might look like `http://example.com?code=test123`.
@@ -320,13 +344,18 @@ To obtain a token, you need to submit the received code using `grant_type=author
Note that the value of the `redirect_uri` parameter in your token request must match the value provided during the login process.
Ensuring this match is crucial for the proper functioning of the authentication flow.
-```sh
-% curl \
+```bash
+curl \
--data-urlencode 'grant_type=authorization_code' \
--data-urlencode 'redirect_uri=http://example.com' \
--data-urlencode "client_id=${client_id}" \
--data-urlencode 'code=test123' \
'http://localhost:4566/_aws/cognito-idp/oauth2/token'
+```
+
+The output will be similar to the following:
+
+```json
{"access_token": "eyJ0eXAi…lKaHx44Q", "expires_in": 86400, "token_type": "Bearer", "refresh_token": "e3f08304", "id_token": "eyJ0eXAi…ADTXv5mA"}
```
@@ -338,13 +367,13 @@ The client credentials grant allows for scope-based authorization from a non-int
Your app can directly request client credentials from the token endpoint to receive an access token.
To request the token from the LocalStack URL, use the following endpoint: `http://cognito-idp.localhost.localstack.cloud:4566/_aws/cognito-idp/oauth2/token`.
-For additional information on our endpoints, refer to our [Internal Endpoints]({{< ref "/references/internal-endpoints" >}}) documentation.
+For additional information on our endpoints, refer to our [Internal Endpoints]() documentation.
If there are multiple user pools, LocalStack identifies the appropriate one by examining the `client_id` of the request.
To get started, follow the example below:
-```sh
+```bash
# Create a user pool client for the user pool.
export client_id=$(awslocal cognito-idp create-user-pool-client --user-pool-id $pool_id --client-name test-client --generate-secret | jq -rc ".UserPoolClient.ClientId")
@@ -449,7 +478,7 @@ Authentication: AWS4-HMAC-SHA256 Credential=test-1234567/20190821/us-east-1/cogn
The LocalStack Web Application provides a Resource Browser for managing Cognito User Pools, and more.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Cognito** under the **Security Identity Compliance** section.
-
+
The Resource Browser allows you to perform the following actions:
@@ -472,4 +501,4 @@ The following code snippets and sample applications provide practical examples o
By default, LocalStack's Cognito does not send actual email messages.
However, if you wish to enable this feature, you will need to provide an email address and configure the corresponding SMTP settings.
-The instructions on configuring the connection parameters of your SMTP server can be found in the [Configuration]({{< ref "configuration#emails" >}}) guide to allow your local Cognito environment to send email notifications.
+The instructions on configuring the connection parameters of your SMTP server can be found in the [Configuration](/aws/capabilities/config/configuration/#emails) guide to allow your local Cognito environment to send email notifications.
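+
+As a minimal sketch, the SMTP connection can be supplied via environment variables when starting LocalStack; the values below are placeholders, and the exact variable names are described in the Configuration guide:
+
+```bash
+SMTP_HOST=localhost:1025 \
+SMTP_USER=sender \
+SMTP_PASS=secret \
+SMTP_EMAIL=sender@example.com \
+localstack start
+```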
diff --git a/src/content/docs/aws/services/config.md b/src/content/docs/aws/services/config.md
index 9c28d46d..77d3006b 100644
--- a/src/content/docs/aws/services/config.md
+++ b/src/content/docs/aws/services/config.md
@@ -1,6 +1,5 @@
---
title: "Config"
-linkTitle: "Config"
description: Get started with Config on LocalStack
persistence: supported
tags: ["Free"]
@@ -13,7 +12,7 @@ Config provides a comprehensive view of the resource configuration across your A
Config continuously records configuration changes and allows you to retain a historical record of these changes.
LocalStack allows you to use the Config APIs in your local environment to assess resource configurations and be notified of any non-compliant items, helping you mitigate potential security risks.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_config" >}}), which provides information on the extent of Config's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of Config's integration with LocalStack.
## Getting started
@@ -28,20 +27,20 @@ The S3 bucket will be used to receive a configuration snapshot on request and co
The SNS topic will be used to notify you when a configuration snapshot is available.
You can create a new S3 bucket and SNS topic using the AWS CLI:
-{{< command >}}
-$ awslocal s3 mb s3://config-test
-$ awslocal sns create-topic --name config-test-topic
-{{< /command >}}
+```bash
+awslocal s3 mb s3://config-test
+awslocal sns create-topic --name config-test-topic
+```
### Create a new configuration recorder
You can now create a new configuration recorder to record configuration changes for specified resource types, using the [`PutConfigurationRecorder`](https://docs.aws.amazon.com/config/latest/APIReference/API_PutConfigurationRecorder.html) API.
Run the following command to create a new configuration recorder:
-{{< command >}}
-$ awslocal configservice put-configuration-recorder \
+```bash
+awslocal configservice put-configuration-recorder \
--configuration-recorder name=default,roleARN=arn:aws:iam::000000000000:role/config-role
-{{< /command >}}
+```
We have specified the `roleARN` parameter to grant the configuration recorder the necessary permissions to access the S3 bucket and SNS topic.
In LocalStack, IAM roles are not enforced, so you can specify any role ARN you like.
@@ -68,8 +67,8 @@ You can inline the JSON into the `awslocal` command.
Run the following command to create the delivery channel:
-{{< command >}}
-$ awslocal configservice put-delivery-channel \
+```bash
+awslocal configservice put-delivery-channel \
--delivery-channel '{
"name": "default",
"s3BucketName": "config-test",
@@ -78,7 +77,7 @@ $ awslocal configservice put-delivery-channel \
"deliveryFrequency": "Twelve_Hours"
}
}'
-{{< /command >}}
+```
### Start the configuration recorder
@@ -86,17 +85,17 @@ You can now start recording configurations of the local AWS resources you have s
You can use the [`StartConfigurationRecorder`](https://docs.aws.amazon.com/config/latest/APIReference/API_StartConfigurationRecorder.html) API to start the configuration recorder.
Run the following command to start the configuration recorder:
-{{< command >}}
-$ awslocal configservice start-configuration-recorder \
+```bash
+awslocal configservice start-configuration-recorder \
--configuration-recorder-name default
-{{< /command >}}
+```
You can list the delivery channels and configuration recorders using the [`DescribeDeliveryChannels`](https://docs.aws.amazon.com/config/latest/APIReference/API_DescribeDeliveryChannels.html) and [`DescribeConfigurationRecorderStatus`](https://docs.aws.amazon.com/config/latest/APIReference/API_DescribeConfigurationRecorderStatus.html) APIs respectively.
-{{< command >}}
-$ awslocal configservice describe-delivery-channels
-$ awslocal configservice describe-configuration-recorder-status
-{{< /command >}}
+```bash
+awslocal configservice describe-delivery-channels
+awslocal configservice describe-configuration-recorder-status
+```
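+
+With the recorder running, you can request an on-demand configuration snapshot, delivered to the S3 bucket configured in the delivery channel, using the [`DeliverConfigSnapshot`](https://docs.aws.amazon.com/config/latest/APIReference/API_DeliverConfigSnapshot.html) API:
+
+```bash
+awslocal configservice deliver-config-snapshot \
+    --delivery-channel-name default
+```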
## Current Limitations
diff --git a/src/content/docs/aws/services/dms.md b/src/content/docs/aws/services/dms.md
index b2e26fe9..ef9e8398 100644
--- a/src/content/docs/aws/services/dms.md
+++ b/src/content/docs/aws/services/dms.md
@@ -1,6 +1,5 @@
---
title: "Database Migration Service (DMS)"
-linkTitle: "Database Migration Service (DMS)"
description: Get started with Database Migration Service (DMS) on LocalStack
tags: ["Ultimate"]
---
@@ -11,12 +10,12 @@ AWS Database Migration Service provides migration solution from databases, data
The migration can be homogeneous (source and target have the same type), but is often heterogeneous, as DMS supports migration from various sources to various targets (both self-hosted and AWS services).
LocalStack only supports selected use cases for DMS at the moment.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_dms" >}}), which provides information on the extent of DMS integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of DMS integration with LocalStack.
-{{< callout "note">}}
+:::note
DMS is in a preview state, supporting only [selected use cases](#supported-use-cases).
You need to set the env `ENABLE_DMS=1` in order to activate it.
-{{< /callout >}}
+:::
## Getting started
@@ -29,13 +28,13 @@ You can run a DMS sample showcasing MariaDB source and Kinesis target from our [
To follow the sample, simply clone the repository:
-```sh
+```bash
git clone https://github.com/localstack-samples/sample-dms-kinesis-rds-mariadb.git
```
Next, start LocalStack (there is a docker-compose included, setting the `ENABLE_DMS=1` flag):
-```sh
+```bash
export LOCALSTACK_AUTH_TOKEN= # this must be an enterprise license token
docker-compose up
```
@@ -53,7 +52,7 @@ make run
You will then see some log output, indicating the status of the ongoing replication:
-```sh
+```bash
************
STARTING FULL LOAD FLOW
************
@@ -147,8 +146,7 @@ The LocalStack Web Application provides a Resource Browser for managing:
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Database Migration Service** under the **Migration and transfer** section.
-
-
+
The Resource Browser supports CRD (Create, Read, Delete) operations on DMS resources.
@@ -176,7 +174,7 @@ The Resource Browser supports CRD (Create, Read, Delete) operations on DMS resou
For RDS MariaDB and RDS MySQL it is not yet possible to set custom db-parameters.
In order to make those databases work with `cdc` migration for DMS, some default db-parameters are changed upon start if the `ENABLE_DMS=1` flag is set:
-```sh
+```bash
binlog_checksum=NONE
binlog_row_image=FULL
binlog_format=ROW
diff --git a/src/content/docs/aws/services/docdb.md b/src/content/docs/aws/services/docdb.md
index 7f89ade5..eee03d43 100644
--- a/src/content/docs/aws/services/docdb.md
+++ b/src/content/docs/aws/services/docdb.md
@@ -1,6 +1,5 @@
---
title: "DocumentDB (DocDB)"
-linkTitle: "DocumentDB (DocDB)"
tags: ["Ultimate"]
description: Get started with AWS DocumentDB on LocalStack
---
@@ -11,17 +10,21 @@ DocumentDB is a fully managed, non-relational database service that supports Mon
DocumentDB is compatible with MongoDB, meaning you can use the same MongoDB drivers, applications, and tools to run, manage, and scale workloads on DocumentDB without having to worry about managing the underlying infrastructure.
LocalStack allows you to use the DocumentDB APIs to create and manage DocumentDB clusters and instances.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_docdb" >}}), which provides information on the extent of DocumentDB's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of DocumentDB's integration with LocalStack.
## Getting started
To create a new DocumentDB cluster, we use the `create-db-cluster` command as follows:
-{{< command >}}
-$ awslocal docdb create-db-cluster --db-cluster-identifier test-docdb-cluster --engine docdb
-{{< /command >}}
+```bash
+awslocal docdb create-db-cluster \
+ --db-cluster-identifier test-docdb-cluster \
+ --engine docdb
+```
+
+The output will be similar to the following:
-```yaml
+```json
{
"DBCluster": {
"DBClusterIdentifier": "test-docdb-cluster",
@@ -63,12 +66,17 @@ created.
As we did not specify a `MasterUsername` or `MasterUserPassword` when creating the database, the MongoDB Docker container will be started without any credentials.
To create a new database, we can use the `create-db-instance` command, like in this example:
-{{< command >}}
-$ awslocal docdb create-db-instance --db-instance-identifier test-company \
---db-instance-class db.r5.large --engine docdb --db-cluster-identifier test-docdb-cluster
-{{< /command >}}
+```bash
+awslocal docdb create-db-instance \
+ --db-instance-identifier test-company \
+ --db-instance-class db.r5.large \
+ --engine docdb \
+ --db-cluster-identifier test-docdb-cluster
+```
-```yaml
+The output will be similar to the following:
+
+```json
{
"DBInstance": {
"DBInstanceIdentifier": "test-docdb-instance",
@@ -114,11 +122,15 @@ Some noticeable fields:
.
To obtain detailed information about the cluster, we use the `describe-db-clusters` command:
-{{< command >}}
-$ awslocal docdb describe-db-clusters --db-cluster-identifier test-docdb-cluster
-{{< /command >}}
-```yaml
+```bash
+awslocal docdb describe-db-clusters \
+ --db-cluster-identifier test-docdb-cluster
+```
+
+The output will be similar to the following:
+
+```json
{
"DBClusters": [
{
@@ -158,22 +170,19 @@ Interacting with the databases is done using `mongosh`, which is an official com
It is designed to provide a modern and enhanced user experience for interacting with MongoDB
databases.
-{{< command >}}
+```bash
+mongosh mongodb://localhost:39045
+```
-$ mongosh mongodb://localhost:39045
+The output will be similar to the following:
+
+```bash
Current Mongosh Log ID: 64a70b795697bcd4865e1b9a
Connecting to: mongodb://localhost:
39045/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.10.1
Using MongoDB: 6.0.7
Using Mongosh: 1.10.1
-
-For mongosh info see: https://docs.mongodb.com/mongodb-shell/
-
-------
-
-test>
-
-{{< /command >}}
+```
This command will default to accessing the `test` database that was created with the cluster.
Notice the port, `39045`,
@@ -181,27 +190,15 @@ which is the cluster port that appears in the aforementioned description.
To work with a specific database, the command is:
-{{< command >}}
-$ mongosh mongodb://localhost:39045/test-company
-Current Mongosh Log ID: 64a71916fae7fdeeb8b43a73
-Connecting to: mongodb://localhost:
-39045/test-company?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.10.1
-Using MongoDB: 6.0.7
-Using Mongosh: 1.10.1
-
-For mongosh info see: https://docs.mongodb.com/mongodb-shell/
-
-------
-test-company>
-
-{{< /command >}}
+```bash
+mongosh mongodb://localhost:39045/test-company
+```
From here on we can manipulate collections
using [the JavaScript methods provided](https://www.mongodb.com/docs/manual/reference/method/)
by `mongosh`:
-{{< command >}}
-
+```bash
test-company> db.createCollection("employees")
{ ok: 1 }
test-company> db.createCollection("customers")
@@ -210,20 +207,19 @@ test-company> show collections
customers
employees
test-company> exit
-
-{{< /command >}}
+```
For more information on how to use MongoDB with `mongosh` please refer to
the [MongoDB documentation](https://www.mongodb.com/docs/).
### Connect to DocumentDB using Node.js Lambda
-{{< callout >}}
+:::note
You need to set `DOCDB_PROXY_CONTAINER=1` when starting LocalStack to be able to use the returned `Endpoint`, which will be correctly resolved automatically.
The flag `DOCDB_PROXY_CONTAINER=1` changes the default behavior so that the container is started as a proxied container.
-Meaning a port from the [pre-defined port]({{< ref "/references/external-ports" >}}) range will be chosen, and when using lambda, you can use `localhost.localstack.cloud` to connect to the instance.
-{{< /callout >}}
+This means a port from the [pre-defined port]() range will be chosen, and when using Lambda, you can use `localhost.localstack.cloud` to connect to the instance.
+:::
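+
+For example, a minimal way to set this flag when starting LocalStack via the `localstack` CLI (a sketch; the variable can equally be set in Docker Compose):
+
+```bash
+DOCDB_PROXY_CONTAINER=1 localstack start
+```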
In this sample, we will use a Node.js Lambda function to connect to a DocumentDB cluster.
For the MongoDB connection, we will use the `mongodb` library.
@@ -235,12 +231,14 @@ We included a snippet at the very end.
#### Create the DocDB Cluster with a username and password
We assume you have a `MasterUsername` and `MasterUserPassword` set for DocDB, e.g.:
-{{< command >}}
-$ awslocal docdb create-db-cluster --db-cluster-identifier test-docdb \
+
+```bash
+awslocal docdb create-db-cluster \
+ --db-cluster-identifier test-docdb \
--engine docdb \
--master-user-password S3cretPwd! \
--master-username someuser
-{{< /command >}}
+```
#### Prepare the lambda function
@@ -248,16 +246,16 @@ First, we create the zip required for the lambda function with the mongodb depen
You will need [`npm`](https://docs.npmjs.com/) in order to install the dependencies.
In your terminal run:
-{{< command >}}
-$ mkdir resources
-$ cd resources
-$ mkdir node_modules
-$ npm install mongodb@6.3.0
-{{< /command >}}
+```bash
+mkdir resources
+cd resources
+mkdir node_modules
+npm install mongodb@6.3.0
+```
Next, copy the following code into a new file named `index.js` in the `resources` folder:
-{{< command >}}
+```javascript
const AWS = require('aws-sdk');
const RDS = AWS.RDS;
const { MongoClient } = require('mongodb');
@@ -305,35 +303,40 @@ exports.handler = async (event) => {
};
}
};
-{{< /command >}}
+```
Now, you can zip the entire directory.
Make sure you are inside the `resources` directory and run:
-{{< command >}}
-$ zip -r function.zip .
-{{< /command >}}
+
+```bash
+zip -r function.zip .
+```
Finally, we can create the Lambda function using `awslocal`:
-{{< command >}}
-$ awslocal lambda create-function \
+
+```bash
+awslocal lambda create-function \
--function-name MyNodeLambda \
--runtime nodejs16.x \
--role arn:aws:iam::000000000000:role/lambda-role \
--handler index.handler \
--zip-file fileb://function.zip \
--environment Variables="{DOCDB_CLUSTER_ID=test-docdb,DOCDB_SECRET=S3cretPwd!}"
-{{< /command >}}
+```
You can invoke the Lambda function by calling:
-{{< command >}}
-$ awslocal lambda invoke --function-name MyNodeLambda outfile
-{{< /command >}}
+
+```bash
+awslocal lambda invoke \
+ --function-name MyNodeLambda \
+ outfile
+```
The `outfile` contains the returned value, e.g.:
-```yaml
+```json
{"statusCode":200,"body":"{\"_id\":\"6560a21ca7771a02ef128c72\",\"key\":\"value\"}"}
-````
+```
#### Use Secret To Connect to DocDB
@@ -343,7 +346,7 @@ Secrets follow a [well-defined pattern](https://docs.aws.amazon.com/secretsmanag
For the Lambda function, you can pass the secret ARN as `SECRET_NAME`.
In the Lambda function, you can then retrieve the secret details like this:
-{{< command >}}
+```javascript
const AWS = require('aws-sdk');
const { MongoClient } = require('mongodb');
@@ -390,17 +393,14 @@ exports.handler = async (event) => {
};
}
};
-
-{{< /command >}}
+```
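+
+To wire the secret into the function, one option is to set the `SECRET_NAME` environment variable on the function created earlier; a sketch, assuming you substitute your secret's ARN for the placeholder (note that this call replaces any previously set variables):
+
+```bash
+awslocal lambda update-function-configuration \
+  --function-name MyNodeLambda \
+  --environment Variables="{SECRET_NAME=<secret-arn>}"
+```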
## Resource Browser
The LocalStack Web Application provides a Resource Browser for managing DocumentDB instances and clusters.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **DocumentDB** under the **Database** section.
-
-
-
+
The Resource Browser allows you to perform the following actions:
diff --git a/src/content/docs/aws/services/dynamodb.md b/src/content/docs/aws/services/dynamodb.md
index 82e8be99..aaf5b691 100644
--- a/src/content/docs/aws/services/dynamodb.md
+++ b/src/content/docs/aws/services/dynamodb.md
@@ -1,6 +1,5 @@
---
title: DynamoDB
-linkTitle: DynamoDB
description: Get started with DynamoDB on LocalStack
persistence: supported
tags: ["Free"]
@@ -11,7 +10,7 @@ It offers a flexible and highly scalable way to store and retrieve data, making
DynamoDB provides a fast and scalable key-value datastore with support for replication, automatic scaling, data encryption at rest, and on-demand backup, among other capabilities.
LocalStack allows you to use the DynamoDB APIs in your local environment to manage key-value and document data models.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_dynamodb" >}}), which provides information on the extent of DynamoDB's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of DynamoDB's integration with LocalStack.
DynamoDB emulation is powered by [DynamoDB Local](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html).
@@ -27,14 +26,14 @@ We will demonstrate how to create DynamoDB table, along with its replicas, and p
You can create a DynamoDB table using the [`CreateTable`](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_CreateTable.html) API.
Execute the following command to create a table named `global01` with a primary key `id`:
-{{< command >}}
-$ awslocal dynamodb create-table \
+```bash
+awslocal dynamodb create-table \
--table-name global01 \
--key-schema AttributeName=id,KeyType=HASH \
--attribute-definitions AttributeName=id,AttributeType=S \
--billing-mode PAY_PER_REQUEST \
--region ap-south-1
-{{< /command >}}
+```
The following output would be retrieved:
@@ -70,12 +69,12 @@ The following output would be retrieved:
You can create replicas of a DynamoDB table using the [`UpdateTable`](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UpdateTable.html) API.
Execute the following command to create replicas in the `eu-central-1` and `us-west-1` regions:
-{{< command >}}
-$ awslocal dynamodb update-table \
+```bash
+awslocal dynamodb update-table \
--table-name global01 \
--replica-updates '[{"Create": {"RegionName": "eu-central-1"}}, {"Create": {"RegionName": "us-west-1"}}]' \
--region ap-south-1
-{{< /command >}}
+```
The following output would be retrieved:
@@ -107,10 +106,10 @@ You can now operate on the table in the replicated regions as well.
You can use the [`ListTables`](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_ListTables.html) API to list the tables in the replicated regions.
Run the following command to list the tables in the `eu-central-1` region:
-{{< command >}}
-$ awslocal dynamodb list-tables \
+```bash
+awslocal dynamodb list-tables \
--region eu-central-1
-{{< /command >}}
+```
The following output would be retrieved:
@@ -127,22 +126,22 @@ The following output would be retrieved:
You can insert an item into a DynamoDB table using the [`PutItem`](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_PutItem.html) API.
Execute the following command to insert an item into the `global01` table:
-{{< command >}}
-$ awslocal dynamodb put-item \
+```bash
+awslocal dynamodb put-item \
--table-name global01 \
--item '{"id":{"S":"foo"}}' \
--region eu-central-1
-{{< /command >}}
+```
You can now query the number of items in the table using the [`DescribeTable`](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_DescribeTable.html) API.
Run the following command to query the number of items in the `global01` table from a different region:
-{{< command >}}
-$ awslocal dynamodb describe-table \
+```bash
+awslocal dynamodb describe-table \
--table-name global01 \
--query 'Table.ItemCount' \
--region ap-south-1
-{{< /command >}}
+```
The following output would be retrieved:
@@ -150,11 +149,11 @@ The following output would be retrieved:
1
```
-{{< callout >}}
+:::note
You can run DynamoDB in memory, which can greatly improve the performance of your database operations.
However, this also means that the data cannot be persisted to disk and will be lost even if persistence is enabled in LocalStack.
To enable this feature, you need to set the environment variable `DYNAMODB_IN_MEMORY=1` while starting LocalStack.
-{{< /callout >}}
+:::
### Time To Live
@@ -167,8 +166,13 @@ In addition, to programmatically trigger the worker at convenience, we provide t
The response returns the number of deleted items:
-```console
+```bash
curl -X DELETE localhost:4566/_aws/dynamodb/expired
+```
+
+The output will be:
+
+```json
{"ExpiredItems": 3}
```
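+
+For completeness, TTL must first be enabled on the table before the worker has anything to expire; a sketch using the `global01` table from above, where the `ttl` attribute name is an illustrative assumption:
+
+```bash
+awslocal dynamodb update-time-to-live \
+  --table-name global01 \
+  --time-to-live-specification "Enabled=true,AttributeName=ttl"
+```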
@@ -177,7 +181,7 @@ curl -X DELETE localhost:4566/_aws/dynamodb/expired
The LocalStack Web Application provides a Resource Browser for managing DynamoDB tables and items.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **DynamoDB** under the **Database** section.
-
+
The Resource Browser allows you to perform the following actions:
diff --git a/src/content/docs/aws/services/dynamodbstreams.md b/src/content/docs/aws/services/dynamodbstreams.md
index 89bd856c..81924e64 100644
--- a/src/content/docs/aws/services/dynamodbstreams.md
+++ b/src/content/docs/aws/services/dynamodbstreams.md
@@ -11,7 +11,7 @@ The stream records are written to a DynamoDB stream, which is an ordered flow of
DynamoDB Streams records data in near-real time, enabling you to develop workflows that process these streams and respond based on their contents.
LocalStack supports DynamoDB Streams, allowing you to create and manage streams in a local environment.
-The supported APIs are available on our [DynamoDB Streams coverage page]({{< ref "coverage_dynamodbstreams" >}}), which provides information on the extent of DynamoDB Streams integration with LocalStack.
+The supported APIs are available on our [DynamoDB Streams coverage page](), which provides information on the extent of DynamoDB Streams integration with LocalStack.
## Getting started
@@ -30,14 +30,14 @@ We will demonstrate the following process using LocalStack:
You can create a DynamoDB table named `BarkTable` using the [`CreateTable`](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_CreateTable.html) API.
Run the following command to create the table:
-{{< command >}}
-$ awslocal dynamodb create-table \
+```bash
+awslocal dynamodb create-table \
--table-name BarkTable \
--attribute-definitions AttributeName=Username,AttributeType=S AttributeName=Timestamp,AttributeType=S \
--key-schema AttributeName=Username,KeyType=HASH AttributeName=Timestamp,KeyType=RANGE \
--provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \
--stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES
-{{< /command >}}
+```
The `BarkTable` has a stream enabled which you can trigger by associating a Lambda function with the stream.
You can notice that in the `LatestStreamArn` field of the response:
@@ -79,9 +79,9 @@ exports.handler = (event, context, callback) => {
You can now create a Lambda function using the [`CreateFunction`](https://docs.aws.amazon.com/lambda/latest/dg/API_CreateFunction.html) API.
Run the following command to create the Lambda function:
-{{< command >}}
-$ zip index.zip index.js
-$ awslocal lambda create-function \
+```bash
+zip index.zip index.js
+awslocal lambda create-function \
--function-name publishNewBark \
--zip-file fileb://index.zip \
--role roleARN \
@@ -89,7 +89,7 @@ $ awslocal lambda create-function \
--timeout 50 \
--runtime nodejs16.x \
--role arn:aws:iam::000000000000:role/lambda-role
-{{< /command >}}
+```
### Invoke the Lambda function
@@ -138,12 +138,12 @@ Create a new file named `payload.json` with the following content:
Run the following command to invoke the Lambda function:
-{{< command >}}
-$ awslocal lambda invoke \
+```bash
+awslocal lambda invoke \
--function-name publishNewBark \
--payload file://payload.json \
--cli-binary-format raw-in-base64-out output.txt
-{{< /command >}}
+```
In the `output.txt` file, you should see the following output:
@@ -157,20 +157,20 @@ To add the DynamoDB stream as an event source for the Lambda function, you need
You can get the stream ARN using the [`DescribeTable`](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_DescribeTable.html) API.
Run the following command to get the stream ARN:
-{{< command >}}
+```bash
awslocal dynamodb describe-table --table-name BarkTable
-{{< /command >}}
+```
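+
+To extract just the ARN, you can optionally add a JMESPath filter; a sketch against the same table:
+
+```bash
+awslocal dynamodb describe-table \
+  --table-name BarkTable \
+  --query 'Table.LatestStreamArn' \
+  --output text
+```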
You can now create an event source mapping using the [`CreateEventSourceMapping`](https://docs.aws.amazon.com/lambda/latest/dg/API_CreateEventSourceMapping.html) API.
Run the following command to create the event source mapping:
-{{< command >}}
+```bash
awslocal lambda create-event-source-mapping \
--function-name publishNewBark \
--event-source arn:aws:dynamodb:us-east-1:000000000000:table/BarkTable/stream/2024-07-12T06:18:37.101 \
--batch-size 1 \
--starting-position TRIM_HORIZON
-{{< /command >}}
+```
Make sure to replace the `event-source` value with the stream ARN you obtained from the previous command.
You should see the following output:
@@ -189,11 +189,11 @@ You should see the following output:
You can now test the event source mapping by adding an item to the `BarkTable` table using the [`PutItem`](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_PutItem.html) API.
Run the following command to add an item to the table:
-{{< command >}}
-$ awslocal dynamodb put-item \
+```bash
+awslocal dynamodb put-item \
--table-name BarkTable \
--item Username={S="Jane Doe"},Timestamp={S="2016-11-18:14:32:17"},Message={S="Testing...1...2...3"}
-{{< /command >}}
+```
You can see the Lambda function being triggered in the LocalStack logs.
@@ -202,9 +202,9 @@ You can find Lambda function being triggered in the LocalStack logs.
You can list the streams using the [`ListStreams`](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_ListStreams.html) API.
Run the following command to list the streams:
-{{< command >}}
+```bash
awslocal dynamodbstreams list-streams
-{{< /command >}}
+```
The following output shows the list of streams:
@@ -223,8 +223,9 @@ The following output shows the list of streams:
You can also describe the stream using the [`DescribeStream`](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_DescribeStream.html) API.
Run the following command to describe the stream:
-{{< command >}}
-$ awslocal dynamodbstreams describe-stream --stream-arn arn:aws:dynamodb:us-east-1:000000000000:table/BarkTable/stream/2024-07-12T06:18:37.101
-{{< /command >}}
+```bash
+awslocal dynamodbstreams describe-stream \
+ --stream-arn arn:aws:dynamodb:us-east-1:000000000000:table/BarkTable/stream/2024-07-12T06:18:37.101
+```
Replace the `stream-arn` value with the stream ARN you obtained from the previous command.
diff --git a/src/content/docs/aws/services/ec2.md b/src/content/docs/aws/services/ec2.mdx
similarity index 82%
rename from src/content/docs/aws/services/ec2.md
rename to src/content/docs/aws/services/ec2.mdx
index fc77a441..082e5878 100644
--- a/src/content/docs/aws/services/ec2.md
+++ b/src/content/docs/aws/services/ec2.mdx
@@ -1,18 +1,19 @@
---
title: "Elastic Compute Cloud (EC2)"
-linkTitle: "Elastic Compute Cloud (EC2)"
tags: ["Free"]
description: Get started with Amazon Elastic Compute Cloud (EC2) on LocalStack
persistence: supported with limitations
---
+import { Tabs, TabItem } from '@astrojs/starlight/components';
+
## Introduction
Elastic Compute Cloud (EC2) is a core service within Amazon Web Services (AWS) that provides scalable and flexible virtual computing resources.
EC2 enables users to launch and manage virtual machines, referred to as instances.
LocalStack allows you to use the EC2 APIs in your local environment to create and manage EC2 instances and related resources such as VPCs, EBS volumes, etc.
-The list of supported APIs can be found on the [API coverage page]({{< ref "coverage_ec2" >}}).
+The list of supported APIs can be found on the [API coverage page]().
## Getting started
@@ -29,61 +30,51 @@ Key pairs are SSH public key/private key combinations that are used to log in to
To create a key pair, you can use the [`CreateKeyPair`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CreateKeyPair.html) API.
Run the following command to create the key pair and pipe the output to a file named `key.pem`:
-{{< command >}}
-$ awslocal ec2 create-key-pair \
+```bash
+awslocal ec2 create-key-pair \
--key-name my-key \
--query 'KeyMaterial' \
--output text | tee key.pem
-{{< /command >}}
+```
You may need to assign necessary permissions to the key files for security reasons.
This can be done using the following commands:
-{{< tabpane text=true >}}
-
-{{< tab header="**Linux**" >}}
-
-{{< command >}}
-$ chmod 400 key.pem
-{{< /command >}}
-
-{{< /tab >}}
-
-{{< tab header="**Windows (Powershell)**" >}}
-
-{{< command >}}
+<Tabs>
+<TabItem label="Linux">
+```bash
+chmod 400 key.pem
+```
+</TabItem>
+<TabItem label="Windows (Powershell)">
+```powershell
$acl = Get-Acl -Path "key.pem"
$fileSystemAccessRule = New-Object System.Security.AccessControl.FileSystemAccessRule("$env:username", "Read", "Allow")
$acl.SetAccessRule($fileSystemAccessRule)
$acl.SetAccessRuleProtection($true, $false)
Set-Acl -Path "key.pem" -AclObject $acl
-{{< /command >}}
-
-{{< /tab >}}
-
-{{< tab header="**Windows (Command Prompt)**" >}}
-
-{{< command >}}
+```
+</TabItem>
+<TabItem label="Windows (Command Prompt)">
+```bash
icacls.exe key.pem /reset
icacls.exe key.pem /grant:r "$($env:username):(r)"
icacls.exe key.pem /inheritance:r
-{{< /command >}}
-
-{{< /tab >}}
-
-{{< /tabpane >}}
+```
+</TabItem>
+</Tabs>
If you already have an SSH public key that you wish to use, such as the one located in your home directory at `~/.ssh/id_rsa.pub`, you can import it instead.
-{{< command >}}
+```bash
-$ awslocal ec2 import-key-pair --key-name my-key --public-key-material file://~/.ssh/id_rsa.pub
+awslocal ec2 import-key-pair --key-name my-key --public-key-material file://~/.ssh/id_rsa.pub
-{{< /command >}}
+```
If you only have the SSH private key, a public key can be generated using the following command, and then imported:
-{{< command >}}
-$ ssh-keygen -y -f id_rsa > id_rsa.pub
-{{< /command >}}
+```bash
+ssh-keygen -y -f id_rsa > id_rsa.pub
+```
### Add rules to your security group
@@ -91,13 +82,13 @@ Currently, LocalStack only supports the `default` security group.
You can add rules to the security group using the [`AuthorizeSecurityGroupIngress`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_AuthorizeSecurityGroupIngress.html) API.
Run the following command to add a rule to allow inbound traffic on port 8000:
-{{< command >}}
-$ awslocal ec2 authorize-security-group-ingress \
+```bash
+awslocal ec2 authorize-security-group-ingress \
--group-id default \
--protocol tcp \
--port 8000 \
--cidr 0.0.0.0/0
-{{< /command >}}
+```
The above command will enable rules in the security group to allow incoming traffic from your local machine on port 8000 of an emulated EC2 instance.
@@ -106,9 +97,9 @@ The above command will enable rules in the security group to allow incoming traf
You can fetch the Security Group ID using the [`DescribeSecurityGroups`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeSecurityGroups.html) API.
Run the following command to fetch the Security Group ID:
-{{< command >}}
-$ awslocal ec2 describe-security-groups
-{{< /command >}}
+```bash
+awslocal ec2 describe-security-groups
+```
You should see the following output:
@@ -140,24 +131,24 @@ python3 -m http.server 8000
You can now run an EC2 instance using the [`RunInstances`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_RunInstances.html) API.
Run the following command to run an EC2 instance by adding the appropriate Security Group ID that we fetched in the previous step:
-{{< command >}}
-$ awslocal ec2 run-instances \
+```bash
+awslocal ec2 run-instances \
--image-id ami-df5de72bdb3b \
--count 1 \
--instance-type t3.nano \
--key-name my-key \
  --security-group-ids '<security-group-id>' \
--user-data file://./user_script.sh
-{{< /command >}}
+```
### Test the Python web server
You can now open the LocalStack logs to find the IP address of the locally emulated EC2 instance.
Run the following command to open the LocalStack logs:
-{{< command >}}
-$ localstack logs
-{{< /command >}}
+```bash
+localstack logs
+```
You should see the following output:
@@ -169,11 +160,11 @@ You should see the following output:
You can now use the IP address to test the Python Web Server.
Run the following command to test the Python Web Server:
-{{< command >}}
-$ curl 172.17.0.4:8000
+```bash
+curl 172.17.0.4:8000
# Or, you can run
-$ curl 127.0.0.1:29043
-{{< /command >}}
+curl 127.0.0.1:29043
+```
You should see the following output:
@@ -186,10 +177,10 @@ You should see the following output:
...
```
-{{< callout "note" >}}
+:::note
Similar to the setup in production AWS, the user data content is stored at `/var/lib/cloud/instances/<instance-id>/` within the instance.
Any execution of this data is recorded in the `/var/log/cloud-init-output.log` file.
-{{< /callout >}}
+:::
### Connecting via SSH
@@ -198,20 +189,20 @@ You can also set up an SSH connection to the locally emulated EC2 instance using
This section assumes that you have created or imported an SSH key pair named `my-key`.
When running the EC2 instance, make sure to pass the `--key-name` parameter to the command:
-{{< command >}}
-$ awslocal ec2 run-instances --key-name my-key ...
-{{< /command >}}
+```bash
+awslocal ec2 run-instances --key-name my-key ...
+```
Once the instance is up and running, we can use the `ssh` command to set up an SSH connection.
Assuming the instance is available under `127.0.0.1:12862` (as per the LocalStack log output), use this command:
-{{< command >}}
-$ ssh -p 12862 -i key.pem root@127.0.0.1
-{{< /command >}}
+```bash
+ssh -p 12862 -i key.pem root@127.0.0.1
+```
-{{< callout "tip" >}}
+:::note
If the `ssh` command throws an error like "Identity file not accessible" or "bad permissions", make sure that the key file has a restrictive `0400` permission as illustrated above.
-{{< /callout >}}
+:::
## VM Managers
@@ -219,7 +210,7 @@ LocalStack EC2 supports multiple methods to simulate the EC2 service.
All tiers support the mock/CRUD capability.
For advanced setups, LocalStack Pro comes with emulation capability for certain resource types so that they behave more like real AWS.
-The underlying method for this can be controlled using the [`EC2_VM_MANAGER`]({{< ref "configuration#ec2" >}}) configuration option.
+The underlying method for this can be controlled using the [`EC2_VM_MANAGER`](/aws/capabilities/config/configuration#ec2) configuration option.
You may choose between plain mocked, containerized, or virtualized resources.
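+
+For example, a sketch of selecting the Docker manager explicitly when starting LocalStack via the CLI:
+
+```bash
+EC2_VM_MANAGER=docker localstack start
+```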
## Mock VM Manager
@@ -228,7 +219,7 @@ With the Mock VM manager, all resources are stored as in-memory representation.
This only offers the CRUD capability.
This is the default VM manager in LocalStack Community edition.
-To use this VM manager in LocalStack Pro, set [`EC2_VM_MANAGER`]({{< ref "configuration#ec2" >}}) to `mock`.
+To use this VM manager in LocalStack Pro, set [`EC2_VM_MANAGER`](/aws/capabilities/config/configuration#ec2) to `mock`.
This serves as the fallback manager if an operation is not implemented in other VM managers.
@@ -238,7 +229,7 @@ LocalStack Pro supports the Docker VM manager which uses the [Docker Engine](htt
This VM manager requires the Docker socket from the host machine to be mounted inside the LocalStack container at `/var/run/docker.sock`.
This is the default VM manager in LocalStack Pro.
-You may set [`EC2_VM_MANAGER`]({{< ref "configuration#ec2" >}}) to `docker` to explicitly use this VM manager.
+You may set [`EC2_VM_MANAGER`](/aws/capabilities/config/configuration#ec2) to `docker` to explicitly use this VM manager.
All launched EC2 instances have the Docker socket mounted inside them at `/var/run/docker.sock` to make Docker-in-Docker use cases possible.
@@ -255,9 +246,9 @@ These can be used to launch EC2 instances which are in fact Docker containers.
You can mark any Docker base image as an AMI using the command below:
-{{< command >}}
-$ docker tag ubuntu:focal localstack-ec2/ubuntu-focal-ami:ami-000001
-{{< /command >}}
+```bash
+docker tag ubuntu:focal localstack-ec2/ubuntu-focal-ami:ami-000001
+```
The above example will make LocalStack treat the `ubuntu:focal` Docker image as an AMI with name `ubuntu-focal-ami` and ID `ami-000001`.
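+
+The tagged image can then be used like any other AMI; a sketch of launching an instance from it:
+
+```bash
+awslocal ec2 run-instances \
+  --image-id ami-000001 \
+  --instance-type t3.nano
+```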
@@ -265,22 +256,23 @@ At startup, LocalStack downloads the following AMIs that can be used to launch D
- Ubuntu 22.04 `ami-df5de72bdb3b`
- Amazon Linux 2023 `ami-024f768332f0`
-{{< callout "note" >}}
+:::note
The auto download of Docker images for default AMIs can be disabled using the `EC2_DOWNLOAD_DEFAULT_IMAGES=0` configuration variable.
-{{< /callout >}}
+:::
All LocalStack-managed Docker AMIs bear the resource tag `ec2_vm_manager:docker`.
These can be listed using:
-{{< command >}}
-$ awslocal ec2 describe-images --filters Name=tag:ec2_vm_manager,Values=docker
-{{< /command >}}
+```bash
+awslocal ec2 describe-images \
+ --filters Name=tag:ec2_vm_manager,Values=docker
+```
-{{< callout "note" >}}
+:::note
If an AMI does not have the `ec2_vm_manager:docker` tag, it means that it is mocked.
Attempting to launch Dockerized instances using these AMIs will result in an `InvalidAMIID.NotFound` error.
See [Mock VM manager](#mock-vm-manager).
-{{< /callout >}}
+:::
AWS does not provide an API to download AMIs which prevents the use of real AWS AMIs on LocalStack.
However, in certain cases it may be possible to tweak your workflow to make it work with LocalStack.
@@ -303,11 +295,11 @@ The execution log is generated at `/var/log/cloud-init-output.log` in the contai
### Networking
-{{< callout "note" >}}
+:::note
Network access from host to EC2 instance containers is not possible on macOS.
This is because Docker Desktop on macOS does not expose the bridge network to the host system.
See [Docker Desktop Known Limitations](https://docs.docker.com/desktop/networking/#known-limitations).
-{{< /callout >}}
+:::
Network addresses for Dockerized instances are allocated by the Docker daemon and can be obtained from the `PublicIpAddress` attribute.
These addresses are also printed in the logs while the instance is being initialized.
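+
+One way to look up the address, sketched with a placeholder instance ID:
+
+```bash
+awslocal ec2 describe-instances \
+  --instance-ids <instance-id> \
+  --query 'Reservations[].Instances[].PublicIpAddress'
+```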
@@ -321,23 +313,22 @@ If not found, it installs and starts the [Dropbear](https://github.com/mkj/dropb
To be able to access the instance at additional ports from the host system, you can modify the default security group and include the required ingress ports.
-{{< callout "note" >}}
+:::note
Security group ingress rules are applied only during the creation of the Dockerized instance.
Modifying a security group will not open any ports for a running instance.
-{{< /callout >}}
+:::
The system supports up to 32 ingress ports.
This constraint is in place to prevent exhausting free ports on the host.
-{{< command >}}
-$ awslocal ec2 authorize-security-group-ingress \
+```bash
+awslocal ec2 authorize-security-group-ingress \
--group-id default \
--protocol tcp \
--port 8080
-{{< /command >}}
-{{< command >}}
-$ awslocal ec2 describe-security-groups --group-names default
-{{< /command >}}
+
+awslocal ec2 describe-security-groups --group-names default
+```
The port mapping details are provided in the logs when the instance starts up.
@@ -350,14 +341,15 @@ The port mapping details are provided in the logs when the instance starts up.
A common use case is to attach an EBS block device to an EC2 instance, which can then be used to create a custom filesystem for additional storage.
This section illustrates how this functionality can be achieved with EC2 Docker instances in LocalStack.
-{{< callout "note" >}}
+:::note
This feature is disabled by default.
-Please set the [`EC2_MOUNT_BLOCK_DEVICES`]({{< ref "configuration#ec2" >}}) configuration option to enable it.
-{{< /callout >}}
+Please set the [`EC2_MOUNT_BLOCK_DEVICES`](/aws/capabilities/config/configuration#ec2) configuration option to enable it.
+:::
First, we create a user data script `init.sh` which creates an ext3 file system on the block device `/ebs-dev/sda1` and mounts it under `/ebs-mounted`:
-{{< command >}}
-$ cat > init.sh <<EOF
-#!/bin/bash
-mkfs -t ext3 /ebs-dev/sda1
-mkdir /ebs-mounted
-mount /ebs-dev/sda1 /ebs-mounted
-touch /ebs-mounted/my-test-file
-EOF
-{{< /command >}}
+```bash
+cat > init.sh <<EOF
+#!/bin/bash
+mkfs -t ext3 /ebs-dev/sda1
+mkdir /ebs-mounted
+mount /ebs-dev/sda1 /ebs-mounted
+touch /ebs-mounted/my-test-file
+EOF
+```
We can then start an EC2 instance, specifying a block device mapping under the device name `/ebs-dev/sda1`, and pointing to our `init.sh` user data script:
-{{< command >}}
+
+```bash
-$ awslocal ec2 run-instances --image-id ami-ff0fea8310f3 --count 1 --instance-type t3.nano \
+awslocal ec2 run-instances --image-id ami-ff0fea8310f3 --count 1 --instance-type t3.nano \
--block-device-mapping '{"DeviceName":"/ebs-dev/sda1","Ebs":{"VolumeSize":10}}' \
--user-data file://init.sh
-{{< /command >}}
+```
Please note that, whereas real AWS uses GiB for volume sizes, LocalStack uses MiB as the unit for `VolumeSize` in the command above (to avoid creating huge files locally).
-Also, by default block device images are limited to 1 GiB in size, but this can be customized by setting the [`EC2_EBS_MAX_VOLUME_SIZE`]({{< ref "configuration#ec2" >}}) config variable (defaults to `1000`).
+Also, by default block device images are limited to 1 GiB in size, but this can be customized by setting the [`EC2_EBS_MAX_VOLUME_SIZE`](/aws/capabilities/config/configuration#ec2) config variable (defaults to `1000`).
Once the instance is successfully started and initialized, we can first determine the container ID via `docker ps`, and then list the contents of the mounted filesystem `/ebs-mounted`, which should contain our test file named `my-test-file`:
-{{< command >}}
-$ docker ps
+
+```bash
+docker ps
+```
+
+The output will be:
+
+```bash
CONTAINER ID IMAGE PORTS NAMES
5c60cf72d84a ...:ami-ff0fea8310f3 19419->22/tcp localstack-ec2...
-$ docker exec 5c60cf72d84a ls /ebs-mounted
+```
+
+You can then list the contents of the mounted filesystem `/ebs-mounted`, which should contain our test file named `my-test-file`:
+
+```bash
+docker exec 5c60cf72d84a ls /ebs-mounted
+```
+
+The output will be:
+
+```bash
my-test-file
-{{< /command >}}
+```
### Instance Metadata Service
@@ -397,19 +406,19 @@ If the `X-aws-ec2-metadata-token` header is present, LocalStack will use IMDSv2,
To create an IMDSv2 token, run the following inside the EC2 container:
-{{< command >}}
-$ curl -X PUT "http://169.254.169.254/latest/api/token" -H "x-aws-ec2-metadata-token-ttl-seconds: 300"
-{{< /command >}}
+```bash
+curl -X PUT "http://169.254.169.254/latest/api/token" -H "x-aws-ec2-metadata-token-ttl-seconds: 300"
+```
The token can be used in subsequent requests like so:
-{{< command >}}
-$ curl -H "x-aws-ec2-metadata-token: " -v http://169.254.169.254/latest/meta-data/
-{{< /command >}}
+```bash
+curl -H "x-aws-ec2-metadata-token: " -v http://169.254.169.254/latest/meta-data/
+```
-{{< callout "note" >}}
+:::note
IMDS IPv6 endpoint is currently not supported.
-{{< /callout >}}
+:::
#### Metadata Categories
@@ -429,7 +438,7 @@ If you would like support for more metadata categories, please make a feature re
### Configuration
-You can use the [`EC2_DOCKER_FLAGS`]({{< ref "configuration#ec2" >}}) LocalStack configuration variable to pass supplementary flags to Docker during the initiation of containerized instances.
+You can use the [`EC2_DOCKER_FLAGS`](/aws/capabilities/config/configuration#ec2) LocalStack configuration variable to pass supplementary flags to Docker during the initiation of containerized instances.
This allows for fine-tuned behaviours, for example, running containers in privileged mode using `--privileged` or specifying an alternate CPU platform with `--platform`.
Keep in mind that this will apply to all instances that are launched in the LocalStack session.
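+
+As an illustrative sketch, privileged mode could be enabled for all launched instances at startup:
+
+```bash
+EC2_DOCKER_FLAGS="--privileged" localstack start
+```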
@@ -450,11 +459,11 @@ Any operation not listed below will use the mock VM manager.
## Libvirt VM Manager
-{{< callout "note" >}}
+:::note
The Libvirt VM manager is under active development.
It is currently offered as a preview and will be part of the Ultimate plan upon release.
If a functionality you desire is missing, please create a feature request on the [GitHub issue tracker](https://github.com/localstack/localstack/issues/new/choose).
-{{< /callout >}}
+:::
The Libvirt VM manager uses the [Libvirt](https://libvirt.org/index.html) API to create fully virtualized EC2 resources.
This lets you create EC2 setups which closely resemble AWS EC2.
@@ -463,42 +472,48 @@ Currently LocalStack Pro supports the KVM-accelerated QEMU hypervisor on Linux h
Installation steps for QEMU/KVM will vary based on the Linux distribution on the host machine.
On Debian/Ubuntu-based distributions, you can run:
-{{< command >}}
-$ sudo apt install -y qemu-kvm libvirt-daemon-system
-{{< /command >}}
+```bash
+sudo apt install -y qemu-kvm libvirt-daemon-system
+```
To check CPU support for virtualization, run:
-{{< command >}}
-$ kvm-ok
+
+```bash
+kvm-ok
+```
+
+The output will be:
+
+```bash
INFO: /dev/kvm exists
KVM acceleration can be used
-{{< /command >}}
+```
-{{< callout "tip" >}}
+:::note
You may also need to enable virtualization support at hardware level.
This is often labelled as 'Virtualization Technology', 'VT-d' or 'VT-x' in UEFI/BIOS setups.
-{{< /callout >}}
+:::
If the Docker host and the Libvirt host are the same, the Libvirt socket on the host must be mounted inside the LocalStack container.
This can be done by including the volume mounts when the LocalStack container is started.
-If you are using the [Docker Compose template]({{< ref "getting-started/installation#docker-compose" >}}), include the following line in `services.localstack.volumes` list:
+If you are using the [Docker Compose template](/aws/getting-started/installation#docker-compose), include the following line in the `services.localstack.volumes` list:
```text
"/var/run/libvirt/libvirt-sock:/var/run/libvirt/libvirt-sock"
```
-If you are using [Docker CLI]({{< ref "getting-started/installation#docker" >}}), include the following parameter in `docker run`:
+If you are using [Docker CLI](/aws/getting-started/installation#docker), include the following parameter in `docker run`:
```text
-v /var/run/libvirt/libvirt-sock:/var/run/libvirt/libvirt-sock
```
-If you are using a remote Libvirt hypervisor, you can set the [`EC2_HYPERVISOR_URI`]({{< ref "configuration#ec2" >}}) config option with a connection URI.
+If you are using a remote Libvirt hypervisor, you can set the [`EC2_HYPERVISOR_URI`](/aws/capabilities/config/configuration#ec2) config option with a connection URI.
-{{< callout "tip" >}}
+:::note
If you encounter an error like `failed to connect to the hypervisor: Permission denied`, you may need to perform additional setup on the hypervisor host.
Please refer to [Libvirt Wiki](https://wiki.libvirt.org/Failed_to_connect_to_the_hypervisor.html#permission-denied) for more details.
-{{< /callout >}}
+:::
The Libvirt VM manager currently does not have full support for persistence.
Underlying virtual machines and volumes are not persisted, only their mock representations are.
@@ -508,67 +523,65 @@ Underlying virtual machines and volumes are not persisted, only their mock repre
All qcow2 images with cloud-init support can be used as AMIs.
You can find the download links for images of popular OSs below.
-{{< tabpane text=true >}}
-
-{{% tab "Ubuntu" %}}
+<Tabs>
+<TabItem label="Ubuntu">
Canonical provides official Ubuntu images at [cloud-images.ubuntu.com](https://cloud-images.ubuntu.com/).
Please use the images in qcow2 format ending in `.img`.
-{{% /tab %}}
+</TabItem>
+<TabItem label="Debian">
+Debian provides cloud images for direct download at [cdimage.debian.org/cdimage/cloud](http://cdimage.debian.org/cdimage/cloud/).
+Please use the `genericcloud` image in qcow2 format.
+</TabItem>
-{{< tab "Debian" >}}
-
-Debian provides cloud images for direct download at cdimage.debian.org/cdimage/cloud.
-
+<TabItem label="Fedora">
+The Fedora project maintains the official cloud images at [fedoraproject.org/cloud/download](https://fedoraproject.org/cloud/download).
-
-Please use the genericcloud image in qcow2 format.
-
-{{< /tab >}}
-
-{{< tab "Fedora" >}}
-
-The Fedora project maintains the official cloud images at fedoraproject.org/cloud/download.
-
-
-
Please use the qcow2 images.
-
-{{< /tab >}}
-
-{{% tab "Microsoft Windows" %}}
+</TabItem>
+<TabItem label="Microsoft Windows">
An evaluation version of Windows Server 2012 R2 is provided by [Cloudbase Solutions](https://cloudbase.it/windows-cloud-images/).
-{{% /tab %}}
+</TabItem>
-{{< /tabpane >}}
+</Tabs>
LocalStack does not come preloaded with any AMIs.
Compatible qcow2 images must be placed at the default Libvirt storage pool at `/var/lib/libvirt/images` on the host machine.
Images must be named with the prefix `ami-` followed by at least 8 hexadecimal characters without an extension, e.g. `ami-1234abcd`.
+
You may need to run the following command to make sure the image is registered with Libvirt:
-{{< command >}}
-$ virsh pool-refresh default
-
+```bash
+virsh pool-refresh default
+```
+
+The output will be:
+
+```bash
Pool default refreshed
-
-{{< /command >}}
-{{< command >}}
-$ virsh vol-list --pool default
-
+```
+
+You can then list the images with:
+
+```bash
+virsh vol-list --pool default
+```
+
+The output will be:
+
+```bash
Name Path
--------------------------------------------------------------------------------------------------------
ami-1234abcd /var/lib/libvirt/images/ami-1234abcd
-
-{{< /command >}}
+```
Only the images that follow the above naming scheme will be recognised by LocalStack as AMIs suitable for launching virtualized instances.
These AMIs will also have the resource tag `ec2_vm_manager:libvirt`.
-{{< command >}}
-$ awslocal ec2 describe-images --filters Name=tag:ec2_vm_manager,Values=libvirt
-{{< /command >}}
+```bash
+awslocal ec2 describe-images --filters Name=tag:ec2_vm_manager,Values=libvirt
+```
### Instances
@@ -582,25 +595,28 @@ If a key pair is provided, it will added as an authorised SSH key for this user.
LocalStack shuts down all virtual machines when it terminates.
The Libvirt domains and volumes are left defined and can be used for debugging, etc.
-{{< callout "tip" >}}
+:::note
Use [Virtual Machine Manager](https://virt-manager.org/) or [virsh](https://www.libvirt.org/manpages/virsh.html) to manage the virtual machines outside of LocalStack.
-{{< /callout >}}
+:::
The Libvirt VM manager supports basic shell scripts for user data.
This can be passed to the `UserData` parameter of the `RunInstances` operation.
To connect to the graphical display of the instance, first obtain the VNC address using:
-{{< command >}}
-$ virsh vncdisplay
+```bash
+virsh vncdisplay <domain>
+```
+
+The output will be:
+
+```bash
127.0.0.1:0
-{{< /command >}}
+```
You can then use a compatible VNC client (e.g. [TigerVNC](https://tigervnc.org/)) to connect and interact with the virtual machine.
-
-
-
+
### Networking
@@ -620,15 +636,15 @@ Use the following configuration at `/etc/docker/daemon.json` on the host machine
Then restart the Docker daemon:
-{{< command >}}
-$ sudo systemctl restart docker
-{{< /command >}}
+```bash
+sudo systemctl restart docker
+```
You can now start the LocalStack container, obtain its IP address and use it from the virtualized instance.
-{{< command >}}
-$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' localstack_main
-{{< /command >}}
+```bash
+docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' localstack_main
+```
### Elastic Block Stores
@@ -661,9 +677,7 @@ Any operation not listed below will use the mock VM manager.
The LocalStack Web Application provides a Resource Browser for managing EC2 instances.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **EC2** under the **Compute** section.
-
-
-
+
The Resource Browser allows you to perform the following actions:
- **Create Instance**: Create a new EC2 instance by clicking the **Launch Instance** button and specifying the AMI ID, instance type, and other parameters.
diff --git a/src/content/docs/aws/services/ecr.md b/src/content/docs/aws/services/ecr.md
index f9b7da45..194b823c 100644
--- a/src/content/docs/aws/services/ecr.md
+++ b/src/content/docs/aws/services/ecr.md
@@ -1,6 +1,5 @@
---
title: "Elastic Container Registry (ECR)"
-linkTitle: "Elastic Container Registry (ECR)"
description: Get started with Elastic Container Registry (ECR) on LocalStack
tags: ["Base"]
persistence: supported
@@ -13,7 +12,7 @@ ECR enables you to store, manage, and deploy Docker container images to build, s
ECR integrates with other AWS services, such as Lambda, ECS, and EKS.
LocalStack allows you to use the ECR APIs in your local environment to build & push Docker images to a local ECR registry.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_ecr" >}}), which provides information on the extent of ECR's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of ECR's integration with LocalStack.
## Getting started
@@ -53,15 +52,15 @@ CMD /root/run_apache.sh
You can now build the Docker image from the `Dockerfile` using the `docker` CLI:
-{{< command >}}
-$ docker build -t localstack-ecr-image .
-{{< / command >}}
+```bash
+docker build -t localstack-ecr-image .
+```
You can run the following command to verify that the image was built successfully:
-{{< command >}}
-$ docker images
-{{< / command >}}
+```bash
+docker images
+```
You will see output similar to the following:
@@ -77,15 +76,15 @@ To push the Docker image to ECR, you first need to create a repository.
You can create an ECR repository using the [`CreateRepository`](https://docs.aws.amazon.com/AmazonECR/latest/APIReference/API_CreateRepository.html) API.
Run the following command to create a repository named `localstack-ecr-repository`:
-{{< command >}}
-$ awslocal ecr create-repository \
+```bash
+awslocal ecr create-repository \
--repository-name localstack-ecr-repository \
--image-scanning-configuration scanOnPush=true
-{{< / command >}}
+```
You will see an output similar to the following:
-```sh
+```bash
{
"repository": {
"repositoryArn": "arn:aws:ecr:us-east-1:000000000000:repository/localstack-ecr-repository",
@@ -111,22 +110,22 @@ You will need the `repositoryUri` value to push the Docker image to the reposito
To push the Docker image to the repository, you first need to tag the image with the `repositoryUri`.
Run the following command to tag the image:
-{{< command >}}
-$ docker tag localstack-ecr-image 000000000000.dkr.ecr.us-east-1.localhost.localstack.cloud:4566/localstack-ecr-repository
-{{< / command >}}
+```bash
+docker tag localstack-ecr-image 000000000000.dkr.ecr.us-east-1.localhost.localstack.cloud:4566/localstack-ecr-repository
+```
You can now push the image to the repository using the `docker` CLI:
-{{< command >}}
-$ docker push 000000000000.dkr.ecr.us-east-1.localhost.localstack.cloud:4566/localstack-ecr-repository
-{{< / command >}}
+```bash
+docker push 000000000000.dkr.ecr.us-east-1.localhost.localstack.cloud:4566/localstack-ecr-repository
+```
The image will take a few seconds to push to the repository.
You can run the following command to verify that the image was pushed successfully:
-{{< command >}}
-$ awslocal ecr list-images --repository-name localstack-ecr-repository
-{{< / command >}}
+```bash
+awslocal ecr list-images --repository-name localstack-ecr-repository
+```
You will see an output similar to the following:
@@ -146,7 +145,7 @@ You will see an output similar to the following:
The LocalStack Web Application provides a Resource Browser for managing ECR repositories and images.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **ECR** under the **Compute** section.
-
+
The Resource Browser allows you to perform the following actions:
diff --git a/src/content/docs/aws/services/ecs.md b/src/content/docs/aws/services/ecs.md
index b6f80bcc..0e0e29f2 100644
--- a/src/content/docs/aws/services/ecs.md
+++ b/src/content/docs/aws/services/ecs.md
@@ -1,6 +1,5 @@
---
title: "Elastic Container Service (ECS)"
-linkTitle: "Elastic Container Service (ECS)"
tags: ["Base"]
description: Get started with Elastic Container Service (ECS) on LocalStack
persistence: supported
@@ -13,7 +12,7 @@ It allows you to run, stop, and manage Docker containers on a cluster.
ECS eliminates the need for you to install, operate, and scale your own cluster management infrastructure.
LocalStack allows you to use the ECS APIs in your local environment to create & manage ECS clusters, tasks, and services.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_ecs" >}}), which provides information on the extent of ECS's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of ECS's integration with LocalStack.
## Getting Started
@@ -24,16 +23,20 @@ We will demonstrate how to create an ECS service using the AWS CLI
### Create a cluster
-{{< callout >}}
+:::note
By default, the **ECS Fargate** launch type is assumed, i.e., the local Docker engine is used for deployment of applications, and there is no need to create and manage EC2 virtual machines to run the containers.
-{{< /callout >}}
+:::
ECS tasks and services run on a cluster.
Execute the following command to create an ECS cluster named `mycluster`:
-{{< command >}}
-$ awslocal ecs create-cluster --cluster-name mycluster
-
+```bash
+awslocal ecs create-cluster --cluster-name mycluster
+```
+
+The output will be:
+
+```json
{
"cluster": {
"clusterArn": "arn:aws:ecs:us-east-1:000000000000:cluster/mycluster",
@@ -51,8 +54,7 @@ $ awslocal ecs create-cluster --cluster-name mycluster
]
}
}
-
-{{< / command >}}
+```
### Create a task definition
@@ -90,9 +92,13 @@ To create a task definition that runs an `ubuntu` container forever (by running
and then run the following command:
-{{< command >}}
-$ awslocal ecs register-task-definition --cli-input-json file://task_definition.json
-
+```bash
+awslocal ecs register-task-definition --cli-input-json file://task_definition.json
+```
+
+The output will be:
+
+```json
{
"taskDefinition": {
"taskDefinitionArn": "arn:aws:ecs:us-east-1:000000000000:task-definition/myfamily:1",
@@ -136,8 +142,7 @@ $ awslocal ecs register-task-definition --cli-input-json file://task_definition.
"registeredAt": 1713364207.068659
}
}
-
-{{< / command >}}
+```
Task definitions are immutable and are identified by their `family` field; calling `register-task-definition` again with the same `family` value creates a new _version_ of the task definition.
@@ -149,9 +154,13 @@ Finally we launch an ECS service using the task definition above.
This will create a number of containers in replica mode, meaning they are distributed over the nodes of the cluster or, in the case of Fargate, over availability zones within the region of the cluster.
To create a service, execute the following command:
-{{< command >}}
-$ awslocal ecs create-service --service-name myservice --cluster mycluster --task-definition myfamily --desired-count 1
-
+```bash
+awslocal ecs create-service --service-name myservice --cluster mycluster --task-definition myfamily --desired-count 1
+```
+
+The output will be:
+
+```json
{
"service": {
"serviceArn": "arn:aws:ecs:us-east-1:000000000000:service/mycluster/myservice",
@@ -196,8 +205,7 @@ $ awslocal ecs create-service --service-name myservice --cluster mycluster --tas
"createdBy": "arn:aws:iam::000000000000:user/test"
}
}
-
-{{< / command >}}
+```
You should see that a new Docker container has been created, using the `ubuntu:latest` image, and running the infinite loop command:
@@ -212,9 +220,13 @@ CONTAINER ID IMAGE COMMAND CREATED
To access the generated logs from the container, run the following command:
-{{< command >}}
+```bash
awslocal logs filter-log-events --log-group-name myloggroup --query 'events[].message'
-
+```
+
+The output will be:
+
+```json
-$ awslocal logs filter-log-events --log-group-name myloggroup | head -n 20
{
"events": [
@@ -236,10 +248,9 @@ $ awslocal logs filter-log-events --log-group-name myloggroup | head -n 20
"logStreamName": "myprefix/ls-ecs-mycluster-75f0515e-0364-4ee5-9828-19026140c91a-0-a1afaa9d/75f0515e-0364-4ee5-9828-19026140c91a",
"timestamp": 1713364216505,
"message": "running",
-
-{{< / command >}}
+```
-See our [CloudWatch Logs user guide]({{< ref "user-guide/aws/logs" >}}) for more details.
+See our [CloudWatch Logs user guide](/aws/services/cloudwatchlogs) for more details.
## LocalStack ECS behavior
@@ -250,7 +261,7 @@ If your ECS containers depend on LocalStack services, your ECS task network shou
If you are running LocalStack through a `docker run` command, do not forget to enable communication from the container to the Docker Engine API.
You can provide access by adding the following option: `-v /var/run/docker.sock:/var/run/docker.sock`.
-For more information regarding the configuration of LocalStack, please check the [LocalStack configuration]({{< ref "configuration" >}}) section.
+For more information regarding the configuration of LocalStack, please check the [LocalStack configuration](/aws/capabilities/config/configuration) section.
## Remote debugging
@@ -261,7 +272,7 @@ Or if you are working with a single container, you can set `ECS_DOCKER_FLAGS="-p
## Mounting local directories for ECS tasks
In some cases, it can be useful to mount code from the host filesystem into the ECS container.
-For example, to enable a quick debugging loop where you can test changes without having to build and redeploy the task's Docker image each time - similar to the [Lambda Hot Reloading]({{< ref "hot-reloading" >}}) feature in LocalStack.
+For example, to enable a quick debugging loop where you can test changes without having to build and redeploy the task's Docker image each time - similar to the [Lambda Hot Reloading](/aws/services/lambda#hot-reloading) feature in LocalStack.
In order to leverage code mounting, we can use the ECS bind mounts feature, which is covered in the [AWS Bind mounts documentation](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/bind-mounts.html).
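+
+A hedged sketch of registering a task definition with a host bind mount; the family name, host path, container path, and container name are illustrative:
+
+```bash
+awslocal ecs register-task-definition \
+  --family code-mount-example \
+  --volumes '[{"name":"code","host":{"sourcePath":"/path/on/host"}}]' \
+  --container-definitions '[{"name":"app","image":"ubuntu:latest","mountPoints":[{"sourceVolume":"code","containerPath":"/app"}]}]'
+```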
@@ -336,14 +347,14 @@ services:
- ~/.docker/config.json:/config.json:ro
```
-Alternatively, you can download the image from the private registry before using it or employ an [Initialization Hook]({{< ref "/references/init-hooks" >}}) to install the Docker client and use these credentials to download the image.
+Alternatively, you can download the image from the private registry before using it or employ an [Initialization Hook](/aws/capabilities/config/initalization-hooks) to install the Docker client and use these credentials to download the image.
## Firelens for ECS Tasks
-{{< callout >}}
+:::note
Firelens emulation is currently available as part of the **LocalStack Enterprise** plan.
If you'd like to try it out, please [contact us](https://www.localstack.cloud/demo) to request access.
-{{< /callout >}}
+:::
LocalStack's ECS emulation supports custom log routing via FireLens.
FireLens allows the ECS service to manage the configuration of the logging driver of application containers, and to create the proper configuration for the `fluentbit`/`fluentd` logging layer.
@@ -356,9 +367,7 @@ Additionally, you cannot use ECS on Kubernetes with FireLens.
The LocalStack Web Application provides a Resource Browser for managing ECS clusters & task definitions.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resource Browser** section, and then clicking on **ECS** under the **Compute** section.
-
-
-
+
The Resource Browser allows you to perform the following actions:
diff --git a/src/content/docs/aws/services/efs.md b/src/content/docs/aws/services/efs.md
index 69571e11..3cea3797 100644
--- a/src/content/docs/aws/services/efs.md
+++ b/src/content/docs/aws/services/efs.md
@@ -1,6 +1,5 @@
---
title: "Elastic File System (EFS)"
-linkTitle: "Elastic File System (EFS)"
description: Get started with Elastic File System (EFS) on LocalStack
tags: ["Ultimate"]
---
@@ -12,7 +11,7 @@ EFS offers scalable and shared file storage that can be accessed by multiple EC2
EFS utilizes the Network File System protocol to allow it to be used as a data source for various applications and workloads.
LocalStack allows you to use the EFS APIs in your local environment to create local file systems, lifecycle configurations, and file system policies.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_efs" >}}), which provides information on the extent of EFS's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of EFS's integration with LocalStack.
## Getting started
@@ -26,13 +25,13 @@ We will demonstrate how to create a file system, apply an IAM resource-based pol
To create a new, empty file system you can use the [`CreateFileSystem`](https://docs.aws.amazon.com/goto/WebAPI/elasticfilesystem-2015-02-01/CreateFileSystem) API.
Run the following command to create a new file system:
-{{< command >}}
-$ awslocal efs create-file-system \
+```bash
+awslocal efs create-file-system \
--performance-mode generalPurpose \
--throughput-mode bursting \
--encrypted \
--tags Key=Name,Value=my-file-system
-{{< /command >}}
+```
The following output would be retrieved:
@@ -58,9 +57,9 @@ The following output would be retrieved:
You can also describe the locally available file systems using the [`DescribeFileSystems`](https://docs.aws.amazon.com/efs/latest/ug/API_DescribeFileSystems.html) API.
Run the following command to describe the local file systems available:
-{{< command >}}
-$ awslocal efs describe-file-systems
-{{< /command >}}
+```bash
+awslocal efs describe-file-systems
+```
You can alternatively pass the `--file-system-id` parameter to the `describe-file-systems` command to retrieve information about a specific file system in the AWS CLI.
@@ -69,19 +68,19 @@ You can alternatively pass the `--file-system-id` parameter to the `describe-fil
You can apply an EFS `FileSystemPolicy` to an EFS file system using the [`PutFileSystemPolicy`](https://docs.aws.amazon.com/efs/latest/ug/API_PutFileSystemPolicy.html) API.
Run the following command to apply a policy to the file system created in the previous step:
-{{< command >}}
-$ awslocal efs put-file-system-policy \
+```bash
+awslocal efs put-file-system-policy \
  --file-system-id <file-system-id> \
--policy "{\"Version\":\"2012-10-17\",\"Id\":\"ExamplePolicy01\",\"Statement\":[{\"Sid\":\"ExampleStatement01\",\"Effect\":\"Allow\",\"Principal\":{\"AWS\":\"*\"},\"Action\":[\"elasticfilesystem:ClientMount\",\"elasticfilesystem:ClientWrite\"],\"Resource\":\"arn:aws:elasticfilesystem:us-east-1:000000000000:file-system/fs-34feac549e66b814\"}]}"
-{{< /command >}}
+```
You can list the file system policies using the [`DescribeFileSystemPolicy`](https://docs.aws.amazon.com/efs/latest/ug/API_DescribeFileSystemPolicy.html) API.
Run the following command to list the file system policies:
-{{< command >}}
-$ awslocal efs describe-file-system-policy \
+```bash
+awslocal efs describe-file-system-policy \
  --file-system-id <file-system-id>
-{{< /command >}}
+```
Replace `<file-system-id>` with the ID of the file system you want to list the policies for.
The output will return the `FileSystemPolicy` for the specified EFS file system.
@@ -91,11 +90,11 @@ The output will return the `FileSystemPolicy` for the specified EFS file system.
You can create a lifecycle configuration for an EFS file system using the [`PutLifecycleConfiguration`](https://docs.aws.amazon.com/efs/latest/ug/API_PutLifecycleConfiguration.html) API.
Run the following command to create a lifecycle configuration for the file system created in the previous step:
-{{< command >}}
-$ awslocal efs put-lifecycle-configuration \
+```bash
+awslocal efs put-lifecycle-configuration \
  --file-system-id <file-system-id> \
--lifecycle-policies "{\"TransitionToIA\":\"AFTER_30_DAYS\"}"
-{{< /command >}}
+```
The following output would be retrieved:
diff --git a/src/content/docs/aws/services/eks.md b/src/content/docs/aws/services/eks.md
index 53518df5..b19f50ce 100644
--- a/src/content/docs/aws/services/eks.md
+++ b/src/content/docs/aws/services/eks.md
@@ -1,6 +1,5 @@
---
title: "Elastic Kubernetes Service (EKS)"
-linkTitle: "Elastic Kubernetes Service (EKS)"
description: Get started with Elastic Kubernetes Service (EKS) on LocalStack
tags: ["Ultimate"]
persistence: supported with limitations
@@ -12,7 +11,7 @@ Elastic Kubernetes Service (EKS) is a managed Kubernetes service that makes it e
Kubernetes is an open-source system for automating containerized applications' deployment, scaling, and management.
LocalStack allows you to use the EKS APIs in your local environment to spin up embedded Kubernetes clusters in your local Docker engine or use an existing Kubernetes installation you can access from your local machine (defined in `$HOME/.kube/config`).
-The supported APIs are available on our [API coverage page]({{< ref "coverage_eks" >}}), which provides information on the extent of EKS's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of EKS's integration with LocalStack.
## Getting started
@@ -31,12 +30,12 @@ In most cases, the installation is automatic, eliminating the need for any manua
You can create a new cluster using the [`CreateCluster`](https://docs.aws.amazon.com/eks/latest/APIReference/API_CreateCluster.html) API.
Run the following command:
-{{< command >}}
-$ awslocal eks create-cluster \
+```bash
+awslocal eks create-cluster \
--name cluster1 \
--role-arn "arn:aws:iam::000000000000:role/eks-role" \
--resources-vpc-config "{}"
-{{ command >}}
+```
You can see an output similar to the following:
@@ -59,30 +58,37 @@ You can see an output similar to the following:
}
```
-{{< callout >}}
+:::note
When setting up a local EKS cluster, if you encounter a `"status": "FAILED"` in the command output and see `Unable to start EKS cluster` in LocalStack logs, remove or rename the `~/.kube/config` file on your machine and retry.
The CLI mounts this file automatically for CLI versions before `3.7`, leading EKS to assume you intend to use the specified cluster, a feature that has specific requirements.
-{{< /callout >}}
+:::
You can use the `docker` CLI to check that some containers have been created:
-{{< command >}}
-$ docker ps
-
+```bash
+docker ps
+```
+
+The output will be:
+
+```bash
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
...
b335f7f089e4 rancher/k3d-proxy:5.0.1-rc.1 "/bin/sh -c nginx-pr…" 1 minute ago Up 1 minute 0.0.0.0:8081->80/tcp, 0.0.0.0:44959->6443/tcp k3d-cluster1-serverlb
f05770ec8523 rancher/k3s:v1.21.5-k3s2 "/bin/k3s server --t…" 1 minute ago Up 1 minute
...
-
-{{< / command >}}
+```
After successfully creating and initializing the cluster, we can easily find the server endpoint, using the [`DescribeCluster`](https://docs.aws.amazon.com/eks/latest/APIReference/API_DescribeCluster.html) API.
Run the following command:
-{{< command >}}
-$ awslocal eks describe-cluster --name cluster1
-
+```bash
+awslocal eks describe-cluster --name cluster1
+```
+
+The output will be:
+
+```json
{
"cluster": {
"name": "cluster1",
@@ -103,8 +109,7 @@ $ awslocal eks describe-cluster --name cluster1
"clientRequestToken": "d188f578-b353-416b-b309-5d8c76ecc4e2"
}
}
-
-{{< / command >}}
+```
### Utilizing ECR Images within EKS
@@ -112,17 +117,17 @@ You can now use ECR (Elastic Container Registry) images within your EKS environm
#### Initial configuration
-To modify the return value of resource URIs for most services, including ECR, you can utilize the `LOCALSTACK_HOST` variable in the [configuration]({{< ref "configuration" >}}).
+To modify the return value of resource URIs for most services, including ECR, you can utilize the `LOCALSTACK_HOST` variable in the [configuration](/aws/capabilities/config/configuration).
By default, ECR returns a `repositoryUri` starting with `localhost.localstack.cloud`, such as: `localhost.localstack.cloud:<port>/<repository-name>`.
-{{< callout >}}
+:::note
In this section, we assume that `localhost.localstack.cloud` resolves in your environment, and LocalStack is connected to a non-default bridge network.
-For more information, refer to the article about [DNS rebind protection]({{< ref "dns-server#dns-rebind-protection" >}}).
+For more information, refer to the article about [DNS rebind protection](/aws/tooling/dns-server#dns-rebind-protection).
If the domain `localhost.localstack.cloud` does not resolve on your host, you can still proceed by setting `LOCALSTACK_HOST=localhost` (not recommended).
LocalStack will take care of the DNS resolution of `localhost.localstack.cloud` within ECR itself, allowing you to use the `localhost:<port>/<repository-name>` URI for tagging and pushing the image on your host.
-{{< /callout >}}
+:::
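For instance, a minimal sketch of that fallback, assuming you start LocalStack via its CLI:

```bash
LOCALSTACK_HOST=localhost localstack start
```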
Once you have configured this correctly, you can seamlessly use your ECR image within EKS as expected.
@@ -134,9 +139,13 @@ For the purpose of this guide, we will retag the `nginx` image to be pushed to a
You can create a new ECR repository using the [`CreateRepository`](https://docs.aws.amazon.com/AmazonECR/latest/APIReference/API_CreateRepository.html) API.
Run the following command:
-{{< command >}}
-$ awslocal ecr create-repository --repository-name "fancier-nginx"
-
+```bash
+awslocal ecr create-repository --repository-name "fancier-nginx"
+```
+
+The output will be:
+
+```json
{
"repository": {
"repositoryArn": "arn:aws:ecr:us-east-1:000000000000:repository/fancier-nginx",
@@ -153,47 +162,49 @@ $ awslocal ecr create-repository --repository-name "fancier-nginx"
}
}
}
-
-{{< / command >}}
+```
You can now pull the `nginx` image from Docker Hub using the `docker` CLI:
-{{< command >}}
-$ docker pull nginx
-{{< / command >}}
+```bash
+docker pull nginx
+```
You can further tag the image to be pushed to ECR:
-{{< command >}}
-$ docker tag nginx 000000000000.dkr.ecr.us-east-1.localhost.localstack.cloud:4566/fancier-nginx
-{{< / command >}}
+```bash
+docker tag nginx 000000000000.dkr.ecr.us-east-1.localhost.localstack.cloud:4566/fancier-nginx
+```
Finally, you can push the image to local ECR:
-{{< command >}}
-$ docker push 000000000000.dkr.ecr.us-east-1.localhost.localstack.cloud:4566/fancier-nginx
-{{< / command >}}
+```bash
+docker push 000000000000.dkr.ecr.us-east-1.localhost.localstack.cloud:4566/fancier-nginx
+```
Now, let us set up the EKS cluster using the image pushed to local ECR.
Next, we can configure `kubectl` to use the EKS cluster, using the [`update-kubeconfig`](https://docs.aws.amazon.com/cli/latest/reference/eks/update-kubeconfig.html) command.
Run the following command:
-{{< command >}}
-$ awslocal eks update-kubeconfig --name cluster1 && \
+```bash
+awslocal eks update-kubeconfig --name cluster1 && \
kubectl config use-context arn:aws:eks:us-east-1:000000000000:cluster/cluster1
-
+```
+
+The output will be:
+
+```bash
...
Added new context arn:aws:eks:us-east-1:000000000000:cluster/cluster1 to /home/localstack/.kube/config
Switched to context "arn:aws:eks:us-east-1:000000000000:cluster/cluster1".
...
-
-{{< / command >}}
+```
You can now go ahead and add a deployment configuration for the `fancier-nginx` image.
-{{< command >}}
-$ cat <<EOF | kubectl apply -f -
+```bash
+cat <<EOF | kubectl apply -f -
...
EOF
+```
You can now describe the pod to see if the image was pulled successfully:
-{{< command >}}
-$ kubectl describe pod fancier-nginx
-{{< / command >}}
+```bash
+kubectl describe pod fancier-nginx
+```
In the events, we can see that the pull from ECR was successful:
@@ -230,9 +241,9 @@ In the events, we can see that the pull from ECR was successful:
Normal Pulled 10s kubelet Successfully pulled image "000000000000.dkr.ecr.us-east-1.localhost.localstack.cloud:4566/fancier-nginx:latest" in 2.412775896s
```
-{{< callout "tip" >}}
-Public Docker images from `registry.k8s.io` can be pulled without additional configuration from EKS nodes, but if you pull images from any other locations that resolve to S3 you can configure `DNS_NAME_PATTERNS_TO_RESOLVE_UPSTREAM=\.s3.*\.amazonaws\.com` in your [configuration]({{< ref "configuration" >}}).
-{{< /callout >}}
+:::note
+Public Docker images from `registry.k8s.io` can be pulled without additional configuration from EKS nodes, but if you pull images from any other locations that resolve to S3 you can configure `DNS_NAME_PATTERNS_TO_RESOLVE_UPSTREAM=\.s3.*\.amazonaws\.com` in your [configuration](/aws/capabilities/config/configuration).
+:::
### Configuring an Ingress for your services
@@ -240,8 +251,8 @@ To make an EKS service externally accessible, it is necessary to create an Ingre
For our sample deployment, we can create an `nginx` Kubernetes service by applying the following configuration:
-{{< command >}}
-$ cat <<EOF | kubectl apply -f -
+```bash
+cat <<EOF | kubectl apply -f -
...
EOF
+```
Use the following ingress configuration to expose the `nginx` service on path `/test123`:
-{{< command >}}
-$ cat <<EOF | kubectl apply -f -
+```bash
+cat <<EOF | kubectl apply -f -
...
EOF
+```
You will be able to send a request to `nginx` via the load balancer port `8081` from the host:
-{{< command >}}
-$ curl http://localhost:8081/test123
-
+```bash
+curl http://localhost:8081/test123
+```
+
+The output will be:
+
+```bash
...
nginx/1.21.6
...
-
-{{< / command >}}
+```
-{{< callout "tip" >}}
+:::note
You can customize the Load Balancer port by configuring `EKS_LOADBALANCER_PORT` in your environment.
-{{< /callout >}}
+:::
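For example, to expose the load balancer on port `8085` when starting LocalStack via its CLI:

```bash
EKS_LOADBALANCER_PORT=8085 localstack start
```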
### Enabling HTTPS with local SSL/TLS certificate for the Ingress
@@ -325,10 +339,10 @@ Once you have deployed your service using the mentioned ingress configuration, i
Remember that the ingress controller does not support HTTP/HTTPS multiplexing within the same Ingress.
Consequently, if you want your service to be accessible via HTTP and HTTPS, you must create two separate Ingress definitions — one Ingress for HTTP and another for HTTPS.
-{{< callout >}}
+:::note
The `ls-secret-tls` secret is created in the `default` namespace.
If your ingress and services are residing in a custom namespace, it is essential to copy the secret to that custom namespace to make use of it.
-{{< /callout >}}
+:::
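One possible way to copy the secret, sketched with `kubectl` and a hypothetical `my-namespace` namespace:

```bash
# export the secret from the default namespace, rewrite its namespace, and re-apply
kubectl get secret ls-secret-tls -n default -o yaml \
  | sed 's/namespace: default/namespace: my-namespace/' \
  | kubectl apply -f -
```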
## Use an existing Kubernetes installation
@@ -343,25 +357,29 @@ volumes:
When using the LocalStack CLI, please configure the `DOCKER_FLAGS` to mount the kubeconfig into the container:
-{{< command >}}
-$ DOCKER_FLAGS="-v ${HOME}/.kube/config:/root/.kube/config" localstack start
-{{ command >}}
+```bash
+DOCKER_FLAGS="-v ${HOME}/.kube/config:/root/.kube/config" localstack start
+```
-{{< callout >}}
+:::note
Using an existing Kubernetes installation is currently only possible when authentication with the cluster uses [X509 client certificates](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#x509-client-certificates).
-{{< /callout >}}
+:::
In recent versions of Docker, you can enable Kubernetes as an embedded service running inside Docker.
The picture below illustrates the Kubernetes settings in Docker for macOS (similar configurations apply for Linux/Windows).
By default, the Kubernetes API is assumed to run on the local TCP port `6443`.
-
+
You can create an EKS Cluster configuration using the following command:
-{{< command >}}
-$ awslocal eks create-cluster --name cluster1 --role-arn arn:aws:iam::000000000000:role/eks-role --resources-vpc-config '{}'
-
+```bash
+awslocal eks create-cluster --name cluster1 --role-arn arn:aws:iam::000000000000:role/eks-role --resources-vpc-config '{}'
+```
+
+The output will be:
+
+```json
{
"cluster": {
"name": "cluster1",
@@ -372,21 +390,23 @@ $ awslocal eks create-cluster --name cluster1 --role-arn arn:aws:iam::0000000000
...
}
}
-
-{{ command >}}
+```
And check that it was created with:
-{{< command >}}
-$ awslocal eks list-clusters
-
+```bash
+awslocal eks list-clusters
+```
+
+The output will be:
+
+```json
{
"clusters": [
"cluster1"
]
}
-
-{{< / command >}}
+```
To interact with your Kubernetes cluster, configure your Kubernetes client (such as `kubectl` or other SDKs) to point to the `endpoint` provided in the `create-cluster` output mentioned earlier.
However, depending on whether you're calling the Kubernetes API from your local machine or from within a Lambda function, you might need to use different endpoint URLs.
@@ -403,12 +423,12 @@ If you need to customize the port or expose the load balancer on multiple ports,
For instance, if you want to expose the load balancer on ports 8085 and 8086, you can use the following tag definition when creating the cluster:
-{{< command >}}
-$ awslocal eks create-cluster \
+```bash
+awslocal eks create-cluster \
--name cluster1 \
--role-arn arn:aws:iam::000000000000:role/eks-role \
--resources-vpc-config '{}' --tags '{"_lb_ports_":"8085,8086"}'
-{{< /command >}}
+```
## Routing Traffic to Services on Different Endpoints
@@ -419,8 +439,8 @@ In such cases, path-based routing may not be ideal if you need the services to b
To address this requirement, we recommend utilizing host-based routing rules, as demonstrated in the example below:
-{{< command >}}
-$ cat <<EOF | kubectl apply -f -
+```bash
+cat <<EOF | kubectl apply -f -
...
EOF
+```
The example defines routing rules for two local endpoints - the first rule points to a service `service-1` accessible under `/v1`, and the second rule points to a service `service-2` accessible under the same path `/v1`.
@@ -461,16 +481,25 @@ Similarly, the second rule points to a service named `service-2`, also accessibl
This approach enables us to access the two distinct services using the same path and port number, but with different host names.
This host-based routing mechanism ensures that each service is uniquely identified based on its designated host name, allowing for a uniform and organized way of accessing multiple services within the EKS cluster.
-{{< command >}}
-$ curl http://eks-service-1.localhost.localstack.cloud:8081/v1
-
+```bash
+curl http://eks-service-1.localhost.localstack.cloud:8081/v1
+```
+
+The output will be:
+
+```bash
... [output of service 1]
-
-$ curl http://eks-service-2.localhost.localstack.cloud:8081/v1
-
+```
+
+```bash
+curl http://eks-service-2.localhost.localstack.cloud:8081/v1
+```
+
+The output will be:
+
+```bash
... [output of service 2]
-
-{{< /command >}}
+```
It is important to note that the host names `eks-service-1.localhost.localstack.cloud` and `eks-service-2.localhost.localstack.cloud` both resolve to `127.0.0.1` (localhost).
Consequently, you can utilize them to communicate with your service endpoints and distinguish between different services within the Kubernetes load balancer.
@@ -489,13 +518,17 @@ If you have specific directories that you want to mount from your local developm
When creating your cluster, include the special tag `_volume_mount_`, which allows you to define the desired volume mounting configuration from your local development machine to the cluster nodes.
-{{< command >}}
-$ awslocal eks create-cluster \
+```bash
+awslocal eks create-cluster \
--name cluster1 \
--role-arn arn:aws:iam::000000000000:role/eks-role \
--resources-vpc-config '{}' \
--tags '{"_volume_mount_":"/path/on/host:/path/on/node"}'
-
+```
+
+The output will be:
+
+```json
{
"cluster": {
"name": "cluster1",
@@ -509,13 +542,12 @@ $ awslocal eks create-cluster \
...
}
}
-
-{{< / command >}}
+```
-{{< callout >}}
+:::note
Note that the tag was previously referred to as `__k3d_volume_mount__`, but it has now been renamed to `_volume_mount_`.
As a result, the tag name `__k3d_volume_mount__` is considered deprecated and will be removed in an upcoming release.
-{{< /callout >}}
+:::
After creating your cluster with the `_volume_mount_` tag, you can create your pods with volume mounts as usual.
The configuration for the volume mounts can be set up similarly to this:
@@ -572,9 +604,7 @@ Users can specify the desired version when creating an EKS cluster in LocalStack
The LocalStack Web Application provides a Resource Browser for managing EKS clusters.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resource Browser** section, and then clicking on **EKS** under the **Compute** section.
-
-
-
+
The Resource Browser allows you to perform the following actions:
diff --git a/src/content/docs/aws/services/elasticache.md b/src/content/docs/aws/services/elasticache.md
index 0904ea10..2cda83d3 100644
--- a/src/content/docs/aws/services/elasticache.md
+++ b/src/content/docs/aws/services/elasticache.md
@@ -1,6 +1,5 @@
---
title: "ElastiCache"
-linkTitle: "ElastiCache"
tags: ["Base"]
description: Get started with AWS ElastiCache on LocalStack
persistence: supported
@@ -15,7 +14,7 @@ It supports popular open-source caching engines like Redis and Memcached (LocalS
providing a means to efficiently store and retrieve frequently accessed data with minimal latency.
LocalStack supports ElastiCache via the Pro offering, allowing you to use the ElastiCache APIs in your local environment.
-The supported APIs are available on our [API Coverage Page]({{< ref "references/coverage/coverage_elasticache" >}}),
+The supported APIs are available on our [API Coverage Page](),
which provides information on the extent of ElastiCache integration with LocalStack.
## Getting started
@@ -26,82 +25,87 @@ This guide is designed for users new to ElastiCache and assumes basic knowledge
After starting LocalStack Pro, you can create a cluster with the following command.
-{{< command >}}
-$ awslocal elasticache create-cache-cluster \
+```bash
+awslocal elasticache create-cache-cluster \
--cache-cluster-id my-redis-cluster \
--cache-node-type cache.t2.micro \
--engine redis \
--num-cache-nodes 1
-{{< /command>}}
+```
Wait for it to be available, then you can use the cluster endpoint for Redis operations.
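For example, you can block until the cluster is ready with the AWS CLI waiter (assuming the standard `cache-cluster-available` waiter works the same against LocalStack):

```bash
awslocal elasticache wait cache-cluster-available \
    --cache-cluster-id my-redis-cluster
```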
-{{< command >}}
-$ awslocal elasticache describe-cache-clusters --show-cache-node-info --query "CacheClusters[0].CacheNodes[0].Endpoint"
+```bash
+awslocal elasticache describe-cache-clusters --show-cache-node-info --query "CacheClusters[0].CacheNodes[0].Endpoint"
+```
+
+The output will be:
+
+```json
{
"Address": "localhost.localstack.cloud",
"Port": 4510
}
-{{< /command >}}
+```
-The cache cluster uses a random port of the [external service port range]({{< ref "external-ports" >}}).
+The cache cluster uses a random port of the [external service port range]().
Use this port number to connect to the Redis instance like so:
-{{< command >}}
-$ redis-cli -p 4510 ping
+```bash
+redis-cli -p 4510 ping
PONG
-$ redis-cli -p 4510 set foo bar
+redis-cli -p 4510 set foo bar
OK
-$ redis-cli -p 4510 get foo
+redis-cli -p 4510 get foo
"bar"
-{{< / command >}}
+```
### Replication groups in non-cluster mode
-{{< command >}}
-$ awslocal elasticache create-replication-group \
+```bash
+awslocal elasticache create-replication-group \
--replication-group-id my-redis-replication-group \
--replication-group-description 'my replication group' \
--engine redis \
--cache-node-type cache.t2.micro \
--num-cache-clusters 3
-{{< /command >}}
+```
Wait for it to be available.
You should see one node group when running the following command:
-{{< command >}}
-$ awslocal elasticache describe-replication-groups --replication-group-id my-redis-replication-group
-{{< /command >}}
+```bash
+awslocal elasticache describe-replication-groups --replication-group-id my-redis-replication-group
+```
To retrieve the primary endpoint:
-{{< command >}}
-$ awslocal elasticache describe-replication-groups --replication-group-id my-redis-replication-group \
+```bash
+awslocal elasticache describe-replication-groups --replication-group-id my-redis-replication-group \
--query "ReplicationGroups[0].NodeGroups[0].PrimaryEndpoint"
-{{< /command >}}
+```
### Replication groups in cluster mode
The cluster mode is enabled by using `--num-node-groups` and `--replicas-per-node-group`:
-{{< command >}}
-$ awslocal elasticache create-replication-group \
+```bash
+awslocal elasticache create-replication-group \
--engine redis \
--replication-group-id my-clustered-redis-replication-group \
--replication-group-description 'my clustered replication group' \
--cache-node-type cache.t2.micro \
--num-node-groups 2 \
--replicas-per-node-group 2
-{{< /command >}}
+```
Note that the group nodes do not have a primary endpoint.
Instead, they have a `ConfigurationEndpoint`, which you can connect to using `redis-cli -c`, where `-c` enables cluster mode.
-{{< command >}}
-$ awslocal elasticache describe-replication-groups --replication-group-id my-clustered-redis-replication-group \
+```bash
+awslocal elasticache describe-replication-groups --replication-group-id my-clustered-redis-replication-group \
--query "ReplicationGroups[0].ConfigurationEndpoint"
-{{< /command >}}
+```
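Assuming the returned `ConfigurationEndpoint` reports the hypothetical port `4510`, you can then connect in cluster mode:

```bash
redis-cli -c -p 4510 ping
```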
## Container mode
@@ -119,11 +123,11 @@ You can access the Resource Browser by opening the LocalStack Web Application in
In the ElastiCache resource browser you can:
* List and remove existing cache clusters
- {{< img src="elasticache-resource-browser-list.png" alt="Create a ElastiCache cluster in the resource browser" >}}
+ 
* View details of cache clusters
- {{< img src="elasticache-resource-browser-show.png" alt="Create a ElastiCache cluster in the resource browser" >}}
+ 
* Create new cache clusters
- {{< img src="elasticache-resource-browser-create.png" alt="Create a ElastiCache cluster in the resource browser" >}}
+ 
## Current Limitations
diff --git a/src/content/docs/aws/services/elasticbeanstalk.md b/src/content/docs/aws/services/elasticbeanstalk.md
index 95950ae4..937a7ff8 100644
--- a/src/content/docs/aws/services/elasticbeanstalk.md
+++ b/src/content/docs/aws/services/elasticbeanstalk.md
@@ -1,8 +1,6 @@
---
title: "Elastic Beanstalk"
-linkTitle: "Elastic Beanstalk"
-description: >
- Get started with Elastic Beanstalk (EB) on LocalStack
+description: Get started with Elastic Beanstalk (EB) on LocalStack
tags: ["Ultimate"]
---
@@ -13,7 +11,7 @@ Elastic Beanstalk orchestrates various AWS services, including EC2, S3, SNS, and
Elastic Beanstalk also supports various application environments, such as Java, .NET, Node.js, PHP, Python, Ruby, Go, and Docker.
LocalStack allows you to use the Elastic Beanstalk APIs in your local environment to create and manage applications, environments and versions.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_elasticbeanstalk" >}}), which provides information on the extent of Elastic Beanstalk's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of Elastic Beanstalk's integration with LocalStack.
## Getting started
@@ -27,10 +25,10 @@ We will demonstrate how to create an Elastic Beanstalk application and environme
To create an Elastic Beanstalk application, you can use the [`CreateApplication`](https://docs.aws.amazon.com/elasticbeanstalk/latest/api/API_CreateApplication.html) API.
Run the following command to create an application named `my-app`:
-{{< command >}}
-$ awslocal elasticbeanstalk create-application \
+```bash
+awslocal elasticbeanstalk create-application \
--application-name my-app
-{{< /command >}}
+```
The following output would be retrieved:
@@ -47,21 +45,21 @@ The following output would be retrieved:
You can also use the [`DescribeApplications`](https://docs.aws.amazon.com/elasticbeanstalk/latest/api/API_DescribeApplications.html) API to retrieve information about your application.
Run the following command to retrieve information about the `my-app` application we created earlier:
-{{< command >}}
-$ awslocal elasticbeanstalk describe-applications \
+```bash
+awslocal elasticbeanstalk describe-applications \
--application-names my-app
-{{< /command >}}
+```
### Create an environment
To create an Elastic Beanstalk environment, you can use the [`CreateEnvironment`](https://docs.aws.amazon.com/elasticbeanstalk/latest/api/API_CreateEnvironment.html) API.
Run the following command to create an environment named `my-environment`:
-{{< command >}}
-$ awslocal elasticbeanstalk create-environment \
+```bash
+awslocal elasticbeanstalk create-environment \
--application-name my-app \
--environment-name my-environment
-{{< /command >}}
+```
The following output would be retrieved:
@@ -78,21 +76,21 @@ The following output would be retrieved:
You can also use the [`DescribeEnvironments`](https://docs.aws.amazon.com/elasticbeanstalk/latest/api/API_DescribeEnvironments.html) API to retrieve information about your environment.
Run the following command to retrieve information about the `my-environment` environment we created earlier:
-{{< command >}}
-$ awslocal elasticbeanstalk describe-environments \
+```bash
+awslocal elasticbeanstalk describe-environments \
--environment-names my-environment
-{{< /command >}}
+```
### Create an application version
To create an Elastic Beanstalk application version, you can use the [`CreateApplicationVersion`](https://docs.aws.amazon.com/elasticbeanstalk/latest/api/API_CreateApplicationVersion.html) API.
Run the following command to create an application version named `v1`:
-{{< command >}}
-$ awslocal elasticbeanstalk create-application-version \
+```bash
+awslocal elasticbeanstalk create-application-version \
--application-name my-app \
--version-label v1
-{{< /command >}}
+```
The following output would be retrieved:
@@ -110,10 +108,10 @@ The following output would be retrieved:
You can also use the [`DescribeApplicationVersions`](https://docs.aws.amazon.com/elasticbeanstalk/latest/api/API_DescribeApplicationVersions.html) API to retrieve information about your application version.
Run the following command to retrieve information about the `v1` application version we created earlier:
-{{< command >}}
-$ awslocal elasticbeanstalk describe-application-versions \
+```bash
+awslocal elasticbeanstalk describe-application-versions \
--application-name my-app
-{{< /command >}}
+```
## Current Limitations
diff --git a/src/content/docs/aws/services/elastictranscoder.md b/src/content/docs/aws/services/elastictranscoder.md
index 4094f454..af826998 100644
--- a/src/content/docs/aws/services/elastictranscoder.md
+++ b/src/content/docs/aws/services/elastictranscoder.md
@@ -1,6 +1,5 @@
---
title: "Elastic Transcoder"
-linkTitle: "Elastic Transcoder"
description: Get started with Elastic Transcoder on LocalStack
tags: ["Base"]
---
@@ -12,7 +11,7 @@ Elastic Transcoder manages the underlying resources, ensuring high availability
It also supports a wide range of input and output formats, enabling users to efficiently process and deliver video content at scale.
LocalStack allows you to mock the Elastic Transcoder APIs in your local environment.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_elastictranscoder" >}}), which provides information on the extent of Elastic Transcoder's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of Elastic Transcoder's integration with LocalStack.
## Getting started
@@ -26,23 +25,23 @@ We will demonstrate how to create an Elastic Transcoder pipeline, read the pipel
You can create S3 buckets using the [`mb`](https://docs.aws.amazon.com/cli/latest/reference/s3/mb.html) API.
Execute the following command to create two buckets named `elasticbucket` and `outputbucket`:
-{{< command >}}
-$ awslocal s3 mb s3://elasticbucket
-$ awslocal s3 mb s3://outputbucket
-{{< /command >}}
+```bash
+awslocal s3 mb s3://elasticbucket
+awslocal s3 mb s3://outputbucket
+```
### Create an Elastic Transcoder pipeline
You can create an Elastic Transcoder pipeline using the [`CreatePipeline`](https://docs.aws.amazon.com/elastictranscoder/latest/developerguide/create-pipeline.html) API.
Execute the following command to create a pipeline named `Default`:
-{{< command >}}
-$ awslocal elastictranscoder create-pipeline \
+```bash
+awslocal elastictranscoder create-pipeline \
--name Default \
--input-bucket elasticbucket \
--output-bucket outputbucket \
--role arn:aws:iam::000000000000:role/Elastic_Transcoder_Default_Role
-{{< /command >}}
+```
The following output would be retrieved:
@@ -80,9 +79,9 @@ The following output would be retrieved:
You can list all pipelines using the [`ListPipelines`](https://docs.aws.amazon.com/elastictranscoder/latest/developerguide/list-pipelines.html) API.
Execute the following command to list all pipelines:
-{{< command >}}
-$ awslocal elastictranscoder list-pipelines
-{{< /command >}}
+```bash
+awslocal elastictranscoder list-pipelines
+```
The following output would be retrieved:
@@ -121,9 +120,9 @@ The following output would be retrieved:
You can read a pipeline using the [`ReadPipeline`](https://docs.aws.amazon.com/elastictranscoder/latest/developerguide/read-pipeline.html) API.
Execute the following command to read the pipeline with the ID `0998507242379-vltecz`:
-{{< command >}}
-$ awslocal elastictranscoder read-pipeline --id 0998507242379-vltecz
-{{< /command >}}
+```bash
+awslocal elastictranscoder read-pipeline --id 0998507242379-vltecz
+```
The following output would be retrieved:
diff --git a/src/content/docs/aws/services/elb.md b/src/content/docs/aws/services/elb.md
index ba65a550..e17eda9a 100644
--- a/src/content/docs/aws/services/elb.md
+++ b/src/content/docs/aws/services/elb.md
@@ -1,6 +1,5 @@
---
title: "Elastic Load Balancing (ELB)"
-linkTitle: "Elastic Load Balancing (ELB)"
description: Get started with Elastic Load Balancing (ELB) on LocalStack
tags: ["Base"]
---
@@ -12,7 +11,7 @@ It also monitors the health of its registered targets and ensures that it routes
You can check [the official AWS documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html) to understand the basic terms and concepts used in the ELB.
LocalStack allows you to use the Elastic Load Balancing APIs in your local environment to create, edit, and view load balancers, target groups, listeners, and rules.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_elbv2" >}}), which provides information on the extent of ELB's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of ELB's integration with LocalStack.
## Getting started
@@ -25,90 +24,90 @@ We will demonstrate how to create an Application Load Balancer, along with its t
Launch an HTTP server which will serve as the target for our load balancer.
-{{< command >}}
-$ docker run --rm -itd -p 5678:80 ealen/echo-server
-{{< /command >}}
+```bash
+docker run --rm -itd -p 5678:80 ealen/echo-server
+```
### Create a load balancer
To specify the subnet and VPC in which the load balancer will be created, you can use the [`DescribeSubnets`](https://docs.aws.amazon.com/elasticloadbalancing/latest/APIReference/API_DescribeSubnets.html) API to retrieve the subnet ID and VPC ID.
In this example, we will use the subnet and VPC in the `us-east-1f` availability zone.
-{{< command >}}
-$ subnet_info=$(awslocal ec2 describe-subnets --filters Name=availability-zone,Values=us-east-1f \
+```bash
+subnet_info=$(awslocal ec2 describe-subnets --filters Name=availability-zone,Values=us-east-1f \
| jq -r '.Subnets[] | select(.AvailabilityZone == "us-east-1f") | {SubnetId: .SubnetId, VpcId: .VpcId}')
-$ subnet_id=$(echo $subnet_info | jq -r '.SubnetId')
+subnet_id=$(echo $subnet_info | jq -r '.SubnetId')
-$ vpc_id=$(echo $subnet_info | jq -r '.VpcId')
-{{< /command >}}
+vpc_id=$(echo $subnet_info | jq -r '.VpcId')
+```
To create a load balancer, you can use the [`CreateLoadBalancer`](https://docs.aws.amazon.com/elasticloadbalancing/latest/APIReference/API_CreateLoadBalancer.html) API.
The following command creates an Application Load Balancer named `example-lb`:
-{{< command >}}
-$ loadBalancer=$(awslocal elbv2 create-load-balancer --name example-lb \
+```bash
+loadBalancer=$(awslocal elbv2 create-load-balancer --name example-lb \
--subnets $subnet_id | jq -r '.LoadBalancers[]|.LoadBalancerArn')
-{{< /command >}}
+```
### Create a target group
To create a target group, you can use the [`CreateTargetGroup`](https://docs.aws.amazon.com/elasticloadbalancing/latest/APIReference/API_CreateTargetGroup.html) API.
The following command creates a target group named `example-target-group`:
-{{< command >}}
-$ targetGroup=$(awslocal elbv2 create-target-group --name example-target-group \
+```bash
+targetGroup=$(awslocal elbv2 create-target-group --name example-target-group \
--protocol HTTP --target-type ip --port 80 --vpc-id $vpc_id \
| jq -r '.TargetGroups[].TargetGroupArn')
-{{< /command >}}
+```
### Register a target
To register a target, you can use the [`RegisterTargets`](https://docs.aws.amazon.com/elasticloadbalancing/latest/APIReference/API_RegisterTargets.html) API.
The following command registers the target with the target group created in the previous step:
-{{< command >}}
-$ awslocal elbv2 register-targets --targets Id=127.0.0.1,Port=5678,AvailabilityZone=all \
+```bash
+awslocal elbv2 register-targets --targets Id=127.0.0.1,Port=5678,AvailabilityZone=all \
--target-group-arn $targetGroup
-{{< /command >}}
+```
-{{< callout >}}
+:::note
Note that in some cases the `targets` parameter `Id` can be the `Gateway` address of the Docker container.
You can find the gateway address by running `docker inspect <container-id>`.
-{{< /callout >}}
+:::
### Create a listener and a rule
We create a listener for the load balancer using the [`CreateListener`](https://docs.aws.amazon.com/elasticloadbalancing/latest/APIReference/API_CreateListener.html) API.
The following command creates a listener for the load balancer created in the previous step:
-{{< command >}}
-$ listenerArn=$(awslocal elbv2 create-listener \
+```bash
+listenerArn=$(awslocal elbv2 create-listener \
--protocol HTTP \
--port 80 \
--default-actions '{"Type":"forward","TargetGroupArn":"'$targetGroup'","ForwardConfig":{"TargetGroups":[{"TargetGroupArn":"'$targetGroup'","Weight":11}]}}' \
--load-balancer-arn $loadBalancer | jq -r '.Listeners[]|.ListenerArn')
-{{< /command >}}
+```
To create a rule for the listener, you can use the [`CreateRule`](https://docs.aws.amazon.com/elasticloadbalancing/latest/APIReference/API_CreateRule.html) API.
The following command creates a rule for the listener created above:
-{{< command >}}
-$ listenerRule=$(awslocal elbv2 create-rule \
+```bash
+listenerRule=$(awslocal elbv2 create-rule \
--conditions Field=path-pattern,Values=/ \
--priority 1 \
--actions '{"Type":"forward","TargetGroupArn":"'$targetGroup'","ForwardConfig":{"TargetGroups":[{"TargetGroupArn":"'$targetGroup'","Weight":11}]}}' \
--listener-arn $listenerArn \
| jq -r '.Rules[].RuleArn')
-{{< /command >}}
+```
### Send a request to the load balancer
Finally, you can issue an HTTP request against the `DNSName` returned by the `CreateLoadBalancer` operation and the `Port` specified in the `CreateListener` call, with the following command:
-{{< command >}}
-$ curl example-lb.elb.localhost.localstack.cloud:4566
-{{< /command >}}
+```bash
+curl example-lb.elb.localhost.localstack.cloud:4566
+```
The following output will be retrieved:
@@ -175,7 +174,7 @@ http(s)://localhost.localstack.cloud:4566/_aws/elb/example-lb/test/path
The following code snippets and sample applications provide practical examples of how to use ELB in LocalStack for various use cases:
-- [Setting up Elastic Load Balancing (ELB) Application Load Balancers using LocalStack, deployed via the Serverless framework]({{< ref "/tutorials/elb-load-balancing" >}})
+- [Setting up Elastic Load Balancing (ELB) Application Load Balancers using LocalStack, deployed via the Serverless framework]()
## Current Limitations
diff --git a/src/content/docs/aws/services/elementalmediaconvert.md b/src/content/docs/aws/services/elementalmediaconvert.md
index 662330d7..99116804 100644
--- a/src/content/docs/aws/services/elementalmediaconvert.md
+++ b/src/content/docs/aws/services/elementalmediaconvert.md
@@ -1,6 +1,5 @@
---
title: "Elemental MediaConvert"
-linkTitle: "Elemental MediaConvert"
description: Get started with Elemental MediaConvert on LocalStack
tags: ["Ultimate"]
---
@@ -11,11 +10,11 @@ Elemental MediaConvert is a file-based video transcoding service with broadcast-
It enables you to easily create high-quality video streams for broadcast and multiscreen delivery.
LocalStack allows you to mock the MediaConvert APIs in your local environment.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_mediaconvert" >}}), which provides information on the extent of MediaConvert's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of MediaConvert's integration with LocalStack.
-{{< callout "note">}}
+:::note
Elemental MediaConvert is in a preview state.
-{{< /callout >}}
+:::
## Getting started
@@ -98,9 +97,9 @@ Create a new file named `job.json` on your local directory:
You can create a MediaConvert job using the [`CreateJob`](https://docs.aws.amazon.com/goto/WebAPI/mediaconvert-2017-08-29/CreateJob) API.
Execute the following command to create a job using a `job.json` file:
-{{< command >}}
-$ awslocal mediaconvert create-job --cli-input-json file://job.json
-{{< /command >}}
+```bash
+awslocal mediaconvert create-job --cli-input-json file://job.json
+```
The following output would be retrieved:
@@ -148,20 +147,20 @@ The following output would be retrieved:
You can list all MediaConvert jobs using the [`ListJobs`](https://docs.aws.amazon.com/mediaconvert/latest/apireference/jobs.html#jobsget) API.
Execute the following command to list all jobs:
-{{< command >}}
-$ awslocal mediaconvert list-jobs
-{{< /command >}}
+```bash
+awslocal mediaconvert list-jobs
+```
### Create a queue
You can create a MediaConvert queue using the [`CreateQueue`](https://docs.aws.amazon.com/mediaconvert/latest/apireference/queues.html#queuespost) API.
Execute the following command to create a queue named `MyQueue`:
-{{< command >}}
-$ awslocal mediaconvert create-queue
+```bash
+awslocal mediaconvert create-queue \
--name MyQueue \
--description "High priority queue for video encoding"
-{{< /command >}}
+```
The following output would be retrieved:
@@ -187,9 +186,9 @@ The following output would be retrieved:
You can list all MediaConvert queues using the [`ListQueues`](https://docs.aws.amazon.com/mediaconvert/latest/apireference/queues.html#queuesget) API.
Execute the following command to list all queues:
-{{< command >}}
-$ awslocal mediaconvert list-queues
-{{< /command >}}
+```bash
+awslocal mediaconvert list-queues
+```
## Current Limitations
diff --git a/src/content/docs/aws/services/emr.md b/src/content/docs/aws/services/emr.md
index 1553dd4e..4b1dd54a 100644
--- a/src/content/docs/aws/services/emr.md
+++ b/src/content/docs/aws/services/emr.md
@@ -1,9 +1,7 @@
---
title: "Elastic MapReduce (EMR)"
-linkTitle: "Elastic MapReduce (EMR)"
tags: ["Ultimate"]
-description: >
- Get started with Elastic MapReduce (EMR) on LocalStack
+description: Get started with Elastic MapReduce (EMR) on LocalStack
---
## Introduction
@@ -16,13 +14,13 @@ LocalStack supports EMR and allows developers to run data analytics workloads lo
EMR utilizes various tools in the [Hadoop](https://hadoop.apache.org/) and [Spark](https://spark.apache.org) ecosystem, and your EMR instance is automatically configured to connect seamlessly to LocalStack's S3 API.
LocalStack also supports EMR Serverless to create applications and job runs, to run your Spark/PySpark jobs locally.
-The supported APIs are available on our [API coverage page]({{ ref "coverage_emr" >}}), which provides information on the extent of EMR's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of EMR's integration with LocalStack.
-{{< callout >}}
+:::note
To utilize the EMR API, certain additional dependencies need to be downloaded from the network (including Hadoop, Hive, Spark, etc).
These dependencies are fetched automatically during service startup, hence it is important to ensure a reliable internet connection when retrieving the dependencies for the first time.
-Alternatively, you can use one of our `*-bigdata` Docker image tags which already ship with the required libraries baked in and may provide better stability (see [here]({{< ref "/user-guide/ci/#ci-images" >}}) for more details).
-{{< /callout >}}
+Alternatively, you can use one of our `*-bigdata` Docker image tags which already ship with the required libraries baked in and may provide better stability (see [here]() for more details).
+:::
## Getting started
@@ -32,14 +30,15 @@ Start your LocalStack container using your preferred method.
We will create a virtual EMR cluster using the AWS CLI.
To create an EMR cluster, run the following command:
-{{< command >}}
-$ awslocal emr create-cluster \
+```bash
+awslocal emr create-cluster \
--release-label emr-5.9.0 \
--instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m4.large InstanceGroupType=CORE,InstanceCount=1,InstanceType=m4.large
-{{< / command >}}
+```
+
You will see a response similar to the following:
-```sh
+```bash
{
"ClusterId": "j-A2KF3EKLAOWRI"
}
diff --git a/src/content/docs/aws/services/es.md b/src/content/docs/aws/services/es.md
index 1d0d903b..d397f668 100644
--- a/src/content/docs/aws/services/es.md
+++ b/src/content/docs/aws/services/es.md
@@ -1,25 +1,30 @@
---
title: "Elasticsearch Service"
-linkTitle: "Elasticsearch Service"
-description: >
- Get started with Amazon Elasticsearch Service (ES) on LocalStack
+description: Get started with Amazon Elasticsearch Service (ES) on LocalStack
tags: ["Free"]
---
+## Introduction
+
The Elasticsearch Service in LocalStack lets you create one or more single-node Elasticsearch/OpenSearch clusters that behave like the [Amazon Elasticsearch Service](https://aws.amazon.com/opensearch-service/the-elk-stack/what-is-elasticsearch/).
This service is, like its AWS counterpart, heavily linked with the [OpenSearch Service](../opensearch).
Any cluster created with the Elasticsearch Service will show up in the OpenSearch Service and vice versa.
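For example, a domain created through the `es` API also shows up when listing domains through the OpenSearch API (a minimal sketch, assuming a domain already exists):

```bash
awslocal opensearch list-domain-names
```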
## Creating an Elasticsearch cluster
-You can go ahead and use [awslocal]({{< ref "aws-cli.md#localstack-aws-cli-awslocal" >}}) to create a new elasticsearch domain via the `aws es create-elasticsearch-domain` command.
+You can go ahead and use [`awslocal`](https://github.com/localstack/awscli-local) to create a new elasticsearch domain via the `aws es create-elasticsearch-domain` command.
-{{< callout >}}
+:::note
Unless you use the default Elasticsearch version, the first time you create a cluster with a specific version the corresponding Elasticsearch binary is downloaded, which may take a while.
-{{< /callout >}}
+:::
+
+```bash
+awslocal es create-elasticsearch-domain --domain-name my-domain
+```
-{{< command >}}
-$ awslocal es create-elasticsearch-domain --domain-name my-domain
+The following output would be retrieved:
+
+```json
{
"DomainStatus": {
"DomainId": "000000000000/my-domain",
@@ -49,11 +54,11 @@ $ awslocal es create-elasticsearch-domain --domain-name my-domain
}
}
}
-{{< / command >}}
+```
In the LocalStack log you will see something like the following, showing the cluster starting up in the background.
-```plaintext
+```bash
2021-11-08T16:29:28:INFO:localstack.services.es.cluster: starting elasticsearch: /opt/code/localstack/localstack/localstack/infra/elasticsearch/bin/elasticsearch -E http.port=57705 -E http.publish_port=57705 -E transport.port=0 -E network.host=127.0.0.1 -E http.compression=false -E path.data="/var/lib/localstack/lib//elasticsearch/arn:aws:es:us-east-1:000000000000:domain/my-domain/data" -E path.repo="/var/lib/localstack/lib//elasticsearch/arn:aws:es:us-east-1:000000000000:domain/my-domain/backup" -E xpack.ml.enabled=false with env {'ES_JAVA_OPTS': '-Xms200m -Xmx600m', 'ES_TMPDIR': '/var/lib/localstack/lib//elasticsearch/arn:aws:es:us-east-1:000000000000:domain/my-domain/tmp'}
2021-11-08T16:29:28:INFO:localstack.services.es.cluster: registering an endpoint proxy for http://my-domain.us-east-1.es.localhost.localstack.cloud:4566 => http://127.0.0.1:57705
2021-11-08T16:29:30:INFO:localstack.services.es.cluster: OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
@@ -68,10 +73,16 @@ In the LocalStack log you will see something like the following, where you can s
and after some time, you should see that the `Processing` state of the domain is set to `false`:
-{{< command >}}
-$ awslocal es describe-elasticsearch-domain --domain-name my-domain | jq ".DomainStatus.Processing"
+```bash
+awslocal es describe-elasticsearch-domain --domain-name my-domain | jq ".DomainStatus.Processing"
+```
+
+The following output would be retrieved:
+
+```bash
false
-{{< / command >}}
+```
+
## Interact with the cluster
@@ -80,8 +91,13 @@ in this case `http://my-domain.us-east-1.es.localhost.localstack.cloud:4566`.
For example:
-{{< command >}}
-$ curl http://my-domain.us-east-1.es.localhost.localstack.cloud:4566
+```bash
+curl http://my-domain.us-east-1.es.localhost.localstack.cloud:4566
+```
+
+The following output would be retrieved:
+
+```json
{
"name" : "localstack",
"cluster_name" : "elasticsearch",
@@ -99,12 +115,17 @@ $ curl http://my-domain.us-east-1.es.localhost.localstack.cloud:4566
},
"tagline" : "You Know, for Search"
}
-{{< / command >}}
+```
Or the health endpoint:
-{{< command >}}
-$ curl -s http://my-domain.us-east-1.es.localhost.localstack.cloud:4566/_cluster/health | jq .
+```bash
+curl -s http://my-domain.us-east-1.es.localhost.localstack.cloud:4566/_cluster/health | jq .
+```
+
+The following output would be retrieved:
+
+```json
{
"cluster_name": "elasticsearch",
"status": "green",
@@ -122,7 +143,7 @@ $ curl -s http://my-domain.us-east-1.es.localhost.localstack.cloud:4566/_cluster
"task_max_waiting_in_queue_millis": 0,
"active_shards_percent_as_number": 100
}
-{{< / command >}}
+```
## Advanced topics
@@ -134,7 +155,7 @@ There are three configurable strategies that govern how domain endpoints are cre
| - | - | - |
| `domain` | `<domain-name>.<region>.es.localhost.localstack.cloud:4566` | This is the default strategy that uses the `localhost.localstack.cloud` domain to route to your localhost |
| `path` | `localhost:4566/es/<region>/<domain-name>` | An alternative that can be useful if you cannot resolve LocalStack's localhost domain |
-| `port` | `localhost:<port>` | Exposes the cluster(s) directly with ports from the [external service port range]({{< ref "external-ports" >}})|
+| `port` | `localhost:<port>` | Exposes the cluster(s) directly with ports from the [external service port range]()|
| `off` | | *Deprecated*. This value now reverts to the `port` setting, using a port from the given range instead of `4571` |
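For example, to switch to the `path` strategy, you can set the endpoint strategy when starting LocalStack (a sketch assuming the `OPENSEARCH_ENDPOINT_STRATEGY` configuration variable, which governs both OpenSearch and Elasticsearch domains):

```bash
OPENSEARCH_ENDPOINT_STRATEGY=path localstack start
```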
Regardless of the service from which the clusters were created, the domain of the cluster always corresponds to the engine type (OpenSearch or Elasticsearch) of the cluster.
@@ -146,17 +167,17 @@ LocalStack allows you to set arbitrary custom endpoints for your clusters in the
This can be used to overwrite the behavior of the endpoint strategies described above.
You can also choose custom domains; however, it is important to add the edge port (`80`/`443`, or by default `4566`).
-{{< command >}}
-$ awslocal es create-elasticsearch-domain --domain-name my-domain \
+```bash
+awslocal es create-elasticsearch-domain --domain-name my-domain \
--elasticsearch-version 7.10 \
--domain-endpoint-options '{ "CustomEndpoint": "http://localhost:4566/my-custom-endpoint", "CustomEndpointEnabled": true }'
-{{< / command >}}
+```
Once the domain processing is complete, you can access the cluster:
-{{< command >}}
-$ curl http://localhost:4566/my-custom-endpoint/_cluster/health
-{{< / command >}}
+```bash
+curl http://localhost:4566/my-custom-endpoint/_cluster/health
+```
### Re-using a single cluster instance
@@ -244,64 +265,78 @@ volumes:
```
1. Run Docker Compose:
-{{< command >}}
-$ docker-compose up -d
-{{< /command >}}
+ ```bash
+ docker-compose up -d
+ ```
2. Create the Elasticsearch domain:
-{{< command >}}
-$ awslocal es create-elasticsearch-domain \
- --domain-name mylogs-2 \
- --elasticsearch-version 7.10 \
- --elasticsearch-cluster-config '{ "InstanceType": "m3.xlarge.elasticsearch", "InstanceCount": 4, "DedicatedMasterEnabled": true, "ZoneAwarenessEnabled": true, "DedicatedMasterType": "m3.xlarge.elasticsearch", "DedicatedMasterCount": 3}'
-{
- "DomainStatus": {
- "DomainId": "000000000000/mylogs-2",
- "DomainName": "mylogs-2",
- "ARN": "arn:aws:es:us-east-1:000000000000:domain/mylogs-2",
- "Created": true,
- "Deleted": false,
- "Endpoint": "mylogs-2.us-east-1.es.localhost.localstack.cloud:4566",
- "Processing": true,
- "ElasticsearchVersion": "7.10",
- "ElasticsearchClusterConfig": {
- "InstanceType": "m3.xlarge.elasticsearch",
- "InstanceCount": 4,
- "DedicatedMasterEnabled": true,
- "ZoneAwarenessEnabled": true,
- "DedicatedMasterType": "m3.xlarge.elasticsearch",
- "DedicatedMasterCount": 3
- },
- "EBSOptions": {
- "EBSEnabled": true,
- "VolumeType": "gp2",
- "VolumeSize": 10,
- "Iops": 0
- },
- "CognitoOptions": {
- "Enabled": false
+ ```bash
+ awslocal es create-elasticsearch-domain \
+ --domain-name mylogs-2 \
+ --elasticsearch-version 7.10 \
+ --elasticsearch-cluster-config '{ "InstanceType": "m3.xlarge.elasticsearch", "InstanceCount": 4, "DedicatedMasterEnabled": true, "ZoneAwarenessEnabled": true, "DedicatedMasterType": "m3.xlarge.elasticsearch", "DedicatedMasterCount": 3}'
+ ```
+
+ The following output would be retrieved:
+
+ ```json
+ {
+ "DomainStatus": {
+ "DomainId": "000000000000/mylogs-2",
+ "DomainName": "mylogs-2",
+ "ARN": "arn:aws:es:us-east-1:000000000000:domain/mylogs-2",
+ "Created": true,
+ "Deleted": false,
+ "Endpoint": "mylogs-2.us-east-1.es.localhost.localstack.cloud:4566",
+ "Processing": true,
+ "ElasticsearchVersion": "7.10",
+ "ElasticsearchClusterConfig": {
+ "InstanceType": "m3.xlarge.elasticsearch",
+ "InstanceCount": 4,
+ "DedicatedMasterEnabled": true,
+ "ZoneAwarenessEnabled": true,
+ "DedicatedMasterType": "m3.xlarge.elasticsearch",
+ "DedicatedMasterCount": 3
+ },
+ "EBSOptions": {
+ "EBSEnabled": true,
+ "VolumeType": "gp2",
+ "VolumeSize": 10,
+ "Iops": 0
+ },
+ "CognitoOptions": {
+ "Enabled": false
+ }
}
}
-}
-{{< /command >}}
+ ```
-3. If the `Processing` status is true, it means that the cluster is not yet healthy.
- You can run `describe-elasticsearch-domain` to receive the status:
-{{< command >}}
-$ awslocal es describe-elasticsearch-domain --domain-name mylogs-2
-{{< /command >}}
+3. If the `Processing` status is true, it means that the cluster is not yet healthy. You can run `describe-elasticsearch-domain` to receive the status:
+ ```bash
+ awslocal es describe-elasticsearch-domain --domain-name mylogs-2
+ ```
4. Check the cluster health endpoint and create indices:
-{{< command >}}
-$ curl mylogs-2.us-east-1.es.localhost.localstack.cloud:4566/_cluster/health
-{"cluster_name":"es-docker-cluster","status":"green","timed_out":false,"number_of_nodes":1,"number_of_data_nodes":1,"active_primary_shards":0,"active_shards":0,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":100.0}[~]
-{{< /command >}}
+ ```bash
+ curl mylogs-2.us-east-1.es.localhost.localstack.cloud:4566/_cluster/health
+ ```
+
+ The following output would be retrieved:
+
+ ```bash
+ {"cluster_name":"es-docker-cluster","status":"green","timed_out":false,"number_of_nodes":1,"number_of_data_nodes":1,"active_primary_shards":0,"active_shards":0,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":100.0}[~]
+ ```
5. Create an example index:
-{{< command >}}
-$ curl -X PUT mylogs-2.us-east-1.es.localhost.localstack.cloud:4566/my-index
-{"acknowledged":true,"shards_acknowledged":true,"index":"my-index"}
-{{< /command >}}
+ ```bash
+ curl -X PUT mylogs-2.us-east-1.es.localhost.localstack.cloud:4566/my-index
+ ```
+
+ The following output would be retrieved:
+
+ ```bash
+ {"acknowledged":true,"shards_acknowledged":true,"index":"my-index"}
+ ```
## Differences to AWS
diff --git a/src/content/docs/aws/services/events.md b/src/content/docs/aws/services/events.md
index b3c943f5..74a82d07 100644
--- a/src/content/docs/aws/services/events.md
+++ b/src/content/docs/aws/services/events.md
@@ -1,6 +1,5 @@
---
title: "EventBridge"
-linkTitle: "EventBridge"
description: Get started with EventBridge on LocalStack
persistence: supported with limitations
tags: ["Free"]
@@ -14,12 +13,12 @@ EventBridge rules are tied to an Event Bus to manage event-driven workflows.
You can use either identity-based or resource-based policies to control access to EventBridge resources, where the former can be attached to IAM users, groups, and roles, and the latter can be attached to specific AWS resources.
LocalStack allows you to use the EventBridge APIs in your local environment to create rules that route events to a target.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_events" >}}), which provides information on the extent of EventBridge's integration with LocalStack.
-For information on EventBridge Pipes, please refer to the [EventBridge Pipes]({{< ref "user-guide/aws/pipes" >}}) section.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of EventBridge's integration with LocalStack.
+For information on EventBridge Pipes, please refer to the [EventBridge Pipes]() section.
-{{< callout >}}
+:::note
The native EventBridge provider, introduced in [LocalStack 3.5.0](https://discuss.localstack.cloud/t/localstack-release-v3-5-0/947), is now the default in 4.0. The legacy provider can still be enabled using the `PROVIDER_OVERRIDE_EVENTS=v1` configuration, but it is deprecated and will be removed in the next major release. We strongly recommend migrating to the new provider.
-{{< /callout >}}
+:::
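If you still rely on the legacy provider during a transition period, a minimal sketch assuming the LocalStack CLI:

```bash
PROVIDER_OVERRIDE_EVENTS=v1 localstack start
```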
## Getting Started
@@ -44,16 +43,16 @@ exports.handler = (event, context, callback) => {
Run the following command to create a new Lambda function using the [`CreateFunction`](https://docs.aws.amazon.com/cli/latest/reference/lambda/create-function.html) API:
-{{< command >}}
-$ zip function.zip index.js
+```bash
+zip function.zip index.js
-$ awslocal lambda create-function \
+awslocal lambda create-function \
--function-name events-example \
--runtime nodejs16.x \
--zip-file fileb://function.zip \
--handler index.handler \
--role arn:aws:iam::000000000000:role/cool-stacklifter
-{{< /command >}}
+```
The output will contain the `FunctionArn`, which you will need in order to add the Lambda function as a target of the EventBridge rule.
@@ -61,25 +60,25 @@ The output will consist of the `FunctionArn`, which you will need to add the Lam
Run the following command to create a new EventBridge rule using the [`PutRule`](https://docs.aws.amazon.com/cli/latest/reference/events/put-rule.html) API:
-{{< command >}}
-$ awslocal events put-rule \
+```bash
+awslocal events put-rule \
--name my-scheduled-rule \
--schedule-expression 'rate(2 minutes)'
-{{< /command >}}
+```
In the above command, we have specified a schedule expression of `rate(2 minutes)`, which runs the rule, and hence invokes the Lambda function, every two minutes.
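Schedule expressions also accept cron syntax; for instance, a hypothetical rule that fires every day at 12:00 UTC:

```bash
awslocal events put-rule \
    --name my-cron-rule \
    --schedule-expression 'cron(0 12 * * ? *)'
```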
Next, grant the EventBridge service principal (`events.amazonaws.com`) permission to run the rule, using the [`AddPermission`](https://docs.aws.amazon.com/cli/latest/reference/events/add-permission.html) API:
-{{< command >}}
-$ awslocal lambda add-permission \
+```bash
+awslocal lambda add-permission \
--function-name events-example \
--statement-id my-scheduled-event \
--action 'lambda:InvokeFunction' \
--principal events.amazonaws.com \
--source-arn arn:aws:events:us-east-1:000000000000:rule/my-scheduled-rule
-{{< /command >}}
+```
### Add the Lambda Function as a Target
@@ -96,11 +95,11 @@ Create a file named `targets.json` with the following content:
Finally, add the Lambda function as a target to the EventBridge rule using the [`PutTargets`](https://docs.aws.amazon.com/cli/latest/reference/events/put-targets.html) API:
-{{< command >}}
-$ awslocal events put-targets \
+```bash
+awslocal events put-targets \
--rule my-scheduled-rule \
--targets file://targets.json
-{{< /command >}}
+```
### Verify the Lambda invocation
@@ -109,27 +108,27 @@ However, wait at least 2 minutes after running the last command before checking
Run the following command to list the CloudWatch log groups:
-{{< command >}}
-$ awslocal logs describe-log-groups
-{{< /command >}}
+```bash
+awslocal logs describe-log-groups
+```
The output will contain the log group name, which you can use to list the log streams:
-{{< command >}}
-$ awslocal logs describe-log-streams \
+```bash
+awslocal logs describe-log-streams \
--log-group-name /aws/lambda/events-example
-{{< /command >}}
+```
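You can also fetch the log events themselves, for example with `filter-log-events` (assuming the log group above exists):

```bash
awslocal logs filter-log-events \
    --log-group-name /aws/lambda/events-example
```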
Alternatively, you can fetch LocalStack logs to verify the Lambda invocation:
-{{< command >}}
-$ localstack logs
+```bash
+localstack logs
...
2023-07-17T09:37:52.028 INFO --- [ asgi_gw_0] localstack.request.aws : AWS lambda.Invoke => 202
2023-07-17T09:37:52.106 INFO --- [ asgi_gw_0] localstack.request.http : POST /_localstack_lambda/97e08ac50c18930f131d9dd9744b8df4/invocations/ecb744d0-b3f2-400f-9e49-c85cf12b1e00/logs => 202
2023-07-17T09:37:52.114 INFO --- [ asgi_gw_0] localstack.request.http : POST /_localstack_lambda/97e08ac50c18930f131d9dd9744b8df4/invocations/ecb744d0-b3f2-400f-9e49-c85cf12b1e00/response => 202
...
-{{< /command >}}
+```
## Supported target types
@@ -151,6 +150,8 @@ At this time LocalStack supports the following [target types](https://docs.aws.a
The LocalStack Web Application provides a Resource Browser for managing EventBridge Buses.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **EventBridge** under the **App Integration** section.
+
+
The Resource Browser allows you to perform the following actions:
- **View the Event Buses**: You can view the list of EventBridge Buses running locally, alongside their Amazon Resource Names (ARNs) and Policies.
diff --git a/src/content/docs/aws/services/firehose.md b/src/content/docs/aws/services/firehose.md
index e66924a7..fe50cb01 100644
--- a/src/content/docs/aws/services/firehose.md
+++ b/src/content/docs/aws/services/firehose.md
@@ -1,14 +1,12 @@
---
title: "Data Firehose"
-linkTitle: "Data Firehose"
-description: >
- Get started with Data Firehose on LocalStack
+description: Get started with Data Firehose on LocalStack
tags: ["Free"]
---
-{{< callout >}}
+:::note
This service was formerly known as 'Kinesis Data Firehose'.
-{{< /callout >}}
+:::
## Introduction
@@ -16,7 +14,7 @@ Data Firehose is a service provided by AWS that allows you to extract, transform
With Data Firehose, you can ingest and deliver real-time data from different sources as it automates data delivery, handles buffering and compression, and scales according to the data volume.
LocalStack allows you to use the Data Firehose APIs in your local environment to load and transform real-time data.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_firehose" >}}), which provides information on the extent of Data Firehose's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of Data Firehose's integration with LocalStack.
## Getting started
@@ -30,9 +28,9 @@ We will demonstrate how to use Firehose to load Kinesis data into Elasticsearch
You can create an Elasticsearch domain using the [`create-elasticsearch-domain`](https://docs.aws.amazon.com/cli/latest/reference/es/create-elasticsearch-domain.html) command.
Execute the following command to create a domain named `es-local`:
-{{< command >}}
-$ awslocal es create-elasticsearch-domain --domain-name es-local
-{{< / command >}}
+```bash
+awslocal es create-elasticsearch-domain --domain-name es-local
+```
Save the value of the `Endpoint` field from the response (similar to `es-local.us-east-1.es.localhost.localstack.cloud:443`), as it will be required further down to confirm the setup.
@@ -43,17 +41,17 @@ Now let us create our target S3 bucket and our source Kinesis stream:
Before creating the stream, we need to create an S3 bucket to store our backup data.
You can do this using the [`mb`](https://docs.aws.amazon.com/cli/latest/reference/s3/mb.html) command:
-{{< command >}}
-$ awslocal s3 mb s3://kinesis-activity-backup-local
-{{< / command >}}
+```bash
+awslocal s3 mb s3://kinesis-activity-backup-local
+```
You can now use the [`CreateStream`](https://docs.aws.amazon.com/kinesis/latest/APIReference/API_CreateStream.html) API to create a Kinesis stream named `kinesis-es-local-stream` with two shards:
-{{< command >}}
-$ awslocal kinesis create-stream \
+```bash
+awslocal kinesis create-stream \
--stream-name kinesis-es-local-stream \
--shard-count 2
-{{< / command >}}
+```
### Create a Firehose delivery stream
@@ -64,20 +62,20 @@ Within the `kinesis-stream-source-configuration`, it is required to specify the
The `elasticsearch-destination-configuration` sets vital parameters, including the access role, the `DomainARN` of the Elasticsearch domain you wish to publish to, and settings such as the `IndexName` and `TypeName` for the Elasticsearch setup.
Additionally, to back up all documents to S3, the `S3BackupMode` parameter is set to `AllDocuments`, accompanied by an `S3Configuration`.
-{{< callout >}}
+:::note
Within LocalStack's default configuration, IAM roles remain unverified and no strict validation is applied on ARNs.
However, when operating within the AWS environment, you need to check the access rights of the specified role for the task.
-{{< /callout >}}
+:::
You can use the [`CreateDeliveryStream`](https://docs.aws.amazon.com/firehose/latest/APIReference/API_CreateDeliveryStream.html) API to create a Firehose delivery stream named `activity-to-elasticsearch-local`:
-{{< command >}}
-$ awslocal firehose create-delivery-stream \
+```bash
+awslocal firehose create-delivery-stream \
--delivery-stream-name activity-to-elasticsearch-local \
--delivery-stream-type KinesisStreamAsSource \
--kinesis-stream-source-configuration "KinesisStreamARN=arn:aws:kinesis:us-east-1:000000000000:stream/kinesis-es-local-stream,RoleARN=arn:aws:iam::000000000000:role/Firehose-Reader-Role" \
--elasticsearch-destination-configuration "RoleARN=arn:aws:iam::000000000000:role/Firehose-Reader-Role,DomainARN=arn:aws:es:us-east-1:000000000000:domain/es-local,IndexName=activity,TypeName=activity,S3BackupMode=AllDocuments,S3Configuration={RoleARN=arn:aws:iam::000000000000:role/Firehose-Reader-Role,BucketARN=arn:aws:s3:::kinesis-activity-backup-local}"
-{{< / command >}}
+```
On successful execution, the command will return the `DeliveryStreamARN` of the created delivery stream:
@@ -93,10 +91,10 @@ Before testing the integration, it's necessary to confirm if the local Elasticse
You can use the [`describe-elasticsearch-domain`](https://docs.aws.amazon.com/cli/latest/reference/es/describe-elasticsearch-domain.html) command to check the status of the Elasticsearch cluster.
Run the following command:
-{{< command >}}
-$ awslocal es describe-elasticsearch-domain \
+```bash
+awslocal es describe-elasticsearch-domain \
--domain-name es-local | jq ".DomainStatus.Processing"
-{{< / command >}}
+```
Once the command returns `false`, you can move forward with data ingestion.
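+
+If you want to script this wait, a simple polling loop along these lines could work (a sketch; assumes `jq` is installed):
+
+```bash
+# Poll every 5 seconds until the domain leaves the Processing state
+until [ "$(awslocal es describe-elasticsearch-domain --domain-name es-local | jq '.DomainStatus.Processing')" = "false" ]; do
+  sleep 5
+done
+```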
The data can be added to the source Kinesis stream or directly to the Firehose delivery stream.
@@ -104,32 +102,32 @@ The data can be added to the source Kinesis stream or directly to the Firehose d
You can add data to the Kinesis stream using the [`PutRecord`](https://docs.aws.amazon.com/kinesis/latest/APIReference/API_PutRecord.html) API.
The following command adds a record to the stream:
-{{< command >}}
-$ awslocal kinesis put-record \
+```bash
+awslocal kinesis put-record \
--stream-name kinesis-es-local-stream \
--data '{ "target": "barry" }' \
--partition-key partition
-{{< / command >}}
+```
-{{< callout "tip" >}}
+:::note
If you are using AWS CLI v2, consider adding `--cli-binary-format raw-in-base64-out` to the command mentioned above.
-{{< /callout >}}
+:::
You can use the [`PutRecord`](https://docs.aws.amazon.com/firehose/latest/APIReference/API_PutRecord.html) API to add data to the Firehose delivery stream.
The following command adds a record to the stream:
-{{< command >}}
-$ awslocal firehose put-record \
+```bash
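+# Note: the base64-encoded Data payload below decodes to {"target": "Hello world"}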
+awslocal firehose put-record \
--delivery-stream-name activity-to-elasticsearch-local \
--record '{ "Data": "eyJ0YXJnZXQiOiAiSGVsbG8gd29ybGQifQ==" }'
-{{< / command >}}
+```
To review the entries in Elasticsearch, you can employ [curl](https://curl.se/) for simplicity.
Remember to replace the URL with the `Endpoint` field from the initial `create-elasticsearch-domain` operation.
-{{< command >}}
-$ curl -s http://es-local.us-east-1.es.localhost.localstack.cloud:443/activity/_search | jq '.hits.hits'
-{{< / command >}}
+```bash
+curl -s http://es-local.us-east-1.es.localhost.localstack.cloud:443/activity/_search | jq '.hits.hits'
+```
You will get an output similar to the following:
diff --git a/src/content/docs/aws/services/fis.md b/src/content/docs/aws/services/fis.md
index 9f30687e..1e581b01 100644
--- a/src/content/docs/aws/services/fis.md
+++ b/src/content/docs/aws/services/fis.md
@@ -1,8 +1,6 @@
---
-title: "Fault Injection Service (FIS)"
-linkTitle: "Fault Injection Service (FIS)"
-description: >
- Get started with Fault Injection Service (FIS) on LocalStack
+title: Fault Injection Service (FIS)
+description: Get started with Fault Injection Service (FIS) on LocalStack
tags: ["Ultimate"]
---
@@ -13,11 +11,11 @@ FIS simulates faults such as resource unavailability and service errors to asses
The full list of such possible fault injections is available in the [AWS docs](https://docs.aws.amazon.com/fis/latest/userguide/fis-actions-reference.html).
LocalStack allows you to use the FIS APIs in your local environment to introduce faults in other services, in order to check how your setup behaves when parts of it stop working locally.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_fis" >}}), which provides information on the extent of FIS API's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of FIS API's integration with LocalStack.
-{{< callout "tip" >}}
-LocalStack also features its own powerful chaos engineering tool, [Chaos API]({{< ref "chaos-api" >}}).
-{{< /callout >}}
+:::note
+LocalStack also features its own powerful chaos engineering tool, [Chaos API](/aws/capabilities/chaos-engineering/chaos-api).
+:::
## Concepts
@@ -30,10 +28,10 @@ FIS defines the following elements:
Together, this is termed an Experiment.
After the designated time, a running experiment stops introducing faults and restores the system to its original state.
-{{< callout "note" >}}
+:::note
FIS experiment emulation is part of LocalStack Enterprise.
If you'd like to try it out, please [contact us](https://www.localstack.cloud/demo).
-{{< /callout >}}
+:::
FIS actions can be categorized into two main types:
@@ -89,9 +87,9 @@ Nonetheless, they are obligatory fields according to AWS specifications and must
Run the following command to create an FIS experiment template using the configuration file we just created:
-{{< command >}}
-$ awslocal fis create-experiment-template --cli-input-json file://create-experiment.json
-{{< /command >}}
+```bash
+awslocal fis create-experiment-template --cli-input-json file://create-experiment.json
+```
The following output would be retrieved:
@@ -132,24 +130,27 @@ The following output would be retrieved:
You can list all the templates you have created using the [`ListExperimentTemplates`](https://docs.aws.amazon.com/fis/latest/APIReference/API_ListExperimentTemplates.html) API:
-{{< command >}}
-$ awslocal fis list-experiment-templates
-{{< /command >}}
+```bash
+awslocal fis list-experiment-templates
+```
### Starting the experiment
Now let us start an EC2 instance that will match the criteria we specified in the experiment template.
-{{< command >}}
-$ awslocal ec2 run-instances --image-id ami-024f768332f0 --count 1 --tag-specifications '{"ResourceType": "instance", "Tags": [{"Key": "foo", "Value": "bar"}]}'
-{{< /command >}}
+```bash
+awslocal ec2 run-instances \
+ --image-id ami-024f768332f0 \
+ --count 1 \
+ --tag-specifications '{"ResourceType": "instance", "Tags": [{"Key": "foo", "Value": "bar"}]}'
+```
You can start the experiment using the [`StartExperiment`](https://docs.aws.amazon.com/fis/latest/APIReference/API_StartExperiment.html) API.
Run the following command and specify the ID of the experiment template you created earlier:
-{{< command >}}
-$ awslocal fis start-experiment --experiment-template-id ad16589a-4a91-4aee-88df-c33446605882
-{{< /command >}}
+```bash
+awslocal fis start-experiment --experiment-template-id ad16589a-4a91-4aee-88df-c33446605882
+```
The following output would be retrieved:
@@ -194,25 +195,28 @@ The following output would be retrieved:
You can use the [`ListExperiments`](https://docs.aws.amazon.com/fis/latest/APIReference/API_ListExperiments.html) API to check the status of your experiment.
Run the following command:
-{{< command >}}
-$ awslocal fis list-experiments
-{{< /command >}}
+```bash
+awslocal fis list-experiments
+```
You can fetch the details of your experiment using the [`GetExperiment`](https://docs.aws.amazon.com/fis/latest/APIReference/API_GetExperiment.html) API.
Run the following command and specify the ID of the experiment you created earlier:
-{{< command >}}
-$ awslocal fis get-experiment --id efee7c02-8733-4d7c-9628-1b60bbec9759
-{{< /command >}}
+```bash
+awslocal fis get-experiment --id efee7c02-8733-4d7c-9628-1b60bbec9759
+```
### Verifying the outcome
You can now test that the experiment is working as expected by trying to obtain the state of the EC2 instance using [`DescribeInstanceStatus`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeInstanceStatus.html).
Run the following command:
-{{< command >}}
-$ awslocal ec2 describe-instance-status --instance-ids i-3c40b52ab72f99c63 --output json --query InstanceStatuses[0].InstanceState
-{{< /command >}}
+```bash
+awslocal ec2 describe-instance-status \
+ --instance-ids i-3c40b52ab72f99c63 \
+ --output json \
+ --query InstanceStatuses[0].InstanceState
+```
If everything happened as expected, the following output would be retrieved:
diff --git a/src/content/docs/aws/services/glacier.md b/src/content/docs/aws/services/glacier.md
index 836f7ff7..c9051546 100644
--- a/src/content/docs/aws/services/glacier.md
+++ b/src/content/docs/aws/services/glacier.md
@@ -1,6 +1,5 @@
---
-title: "Glacier"
-linkTitle: "Glacier"
+title: Glacier
description: Get started with S3 Glacier on LocalStack
tags: ["Ultimate"]
persistence: supported
@@ -16,7 +15,7 @@ Glacier uses Jobs to retrieve the data in an Archive or list the inventory of a
LocalStack allows you to use the Glacier APIs in your local environment to manage Vaults and Archives.
You can use the Glacier API to configure and set up vaults where you can store archives and manage them.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_glacier" >}}), which provides information on the extent of Glacier's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of Glacier's integration with LocalStack.
## Getting started
@@ -30,16 +29,16 @@ We will demonstrate how to create a vault, upload an archive, initiate a job to
You can create a vault using the [`CreateVault`](https://docs.aws.amazon.com/amazonglacier/latest/dev/api-vault-put.html) API.
Run the following command to create a Glacier vault named `sample-vault`.
-{{< command >}}
-$ awslocal glacier create-vault --vault-name sample-vault --account-id -
-{{< /command >}}
+```bash
+awslocal glacier create-vault --vault-name sample-vault --account-id -
+```
You can get the details of your vault using the [`DescribeVault`](https://docs.aws.amazon.com/amazonglacier/latest/dev/api-vault-get.html) API.
Run the following command to describe your vault.
-{{< command >}}
-$ awslocal glacier describe-vault --vault-name sample-vault --account-id -
-{{< /command >}}
+```bash
+awslocal glacier describe-vault --vault-name sample-vault --account-id -
+```
On successful creation of the Glacier vault, you will see the following output:
@@ -60,9 +59,9 @@ You can upload an archive or an individual file to a vault using the [`UploadArc
Download a random image from the internet and save it as `image.jpg`.
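+
+For example, you could fetch a placeholder image with curl (an illustrative sketch; assumes the third-party service picsum.photos is reachable):
+
+```bash
+curl -L -o image.jpg https://picsum.photos/200
+```
+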
Run the following command to upload the file to your Glacier vault:
-{{< command >}}
-$ awslocal glacier upload-archive --vault-name sample-vault --account-id - --body image.jpg
-{{< /command >}}
+```bash
+awslocal glacier upload-archive --vault-name sample-vault --account-id - --body image.jpg
+```
On successful upload of the Glacier archive, you will see the following output:
@@ -79,9 +78,13 @@ On successful upload of the Glacier archive, you will see the following output:
You can initiate the retrieval of an archive from a vault using the [`InitiateJob`](https://docs.aws.amazon.com/amazonglacier/latest/dev/api-initiate-job-post.html) API.
To download an archive, you will need to initiate an `archive-retrieval` job first to make the Archive available for download.
-{{< command >}}
-$ awslocal glacier initiate-job --vault-name sample-vault --account-id - --job-parameters '{"Type":"archive-retrieval","ArchiveId":"d41d8cd98f00b204e9800998ecf8427e"}'
-{{< /command >}}
+
+```bash
+awslocal glacier initiate-job \
+ --vault-name sample-vault \
+ --account-id - \
+ --job-parameters '{"Type":"archive-retrieval","ArchiveId":"d41d8cd98f00b204e9800998ecf8427e"}'
+```
On successful execution of the job, you will see the following output:
@@ -96,9 +99,9 @@ On successful execution of the job, you will see the following output:
You can list the current and previous processes, called Jobs, to monitor the requests sent to the Glacier API using the [`ListJobs`](https://docs.aws.amazon.com/amazonglacier/latest/dev/api-jobs-get.html) API.
-{{< command >}}
-$ awslocal glacier list-jobs --vault-name sample-vault --account-id -
-{{< /command >}}
+```bash
+awslocal glacier list-jobs --vault-name sample-vault --account-id -
+```
On successful execution of the command, you will see the following output:
@@ -130,22 +133,30 @@ The data download process can be verified through the previous `ListJobs` call t
Once the `ArchiveRetrieval` Job is complete, the data can be downloaded.
You can use the `JobId` of the Job to download your archive with the following command:
-{{< command >}}
-$ awslocal glacier get-job-output --vault-name sample-vault --account-id - --job-id 25CEOTJ7ZUR5Q7YY0B1O55AE4C3L1502EOHWMNY10IIYEBWEQB73D23S8BVYO9RTRTPLRK2LJLUCCRM52GDV87C9A4JW my-archive.jpg
-{{< /command >}}
+```bash
+awslocal glacier get-job-output \
+ --vault-name sample-vault \
+ --account-id - \
+ --job-id 25CEOTJ7ZUR5Q7YY0B1O55AE4C3L1502EOHWMNY10IIYEBWEQB73D23S8BVYO9RTRTPLRK2LJLUCCRM52GDV87C9A4JW \
+ my-archive.jpg
+```
-{{< callout >}}
+:::danger
Please note that currently this operation is only mocked: it will create an empty file named `my-archive.jpg` that does not contain the contents of your archive.
-{{< /callout >}}
+:::
### Retrieve the inventory information
You can also initiate the retrieval of the inventory of a vault using the same [`InitiateJob`](https://docs.aws.amazon.com/amazonglacier/latest/dev/api-initiate-job-post.html) API.
Initiate a job of the specified type to get the details of the individual inventory items inside a Vault using the `initiate-job` command:
-{{< command >}}
-$ awslocal glacier initiate-job --vault-name sample-vault --account-id - --job-parameters '{"Type":"inventory-retrieval","ArchiveId":"d41d8cd98f00b204e9800998ecf8427e"}'
-{{< /command >}}
+
+```bash
+awslocal glacier initiate-job \
+ --vault-name sample-vault \
+ --account-id - \
+ --job-parameters '{"Type":"inventory-retrieval","ArchiveId":"d41d8cd98f00b204e9800998ecf8427e"}'
+```
On successful execution of the command, you will see the following output:
@@ -157,10 +168,14 @@ On successful execution of the command, you will see the following output:
```
In the same fashion as the archive retrieval, you can now download the result of the inventory retrieval job using `GetJobOutput` with the `JobId` from the result of the previous command:
-{{< command >}}
-$ awslocal glacier get-job-output \
- --vault-name sample-vault --account-id - --job-id P5972CSWFR803BHX48OD1A7JWNBFJUMYVWCMZWY55ZJPIJMG1XWFV9ISZPZH1X3LBF0UV3UG6ORETM0EHE5R86Z47B1F inventory.json
-{{< /command >}}
+
+```bash
+awslocal glacier get-job-output \
+ --vault-name sample-vault \
+ --account-id - \
+ --job-id P5972CSWFR803BHX48OD1A7JWNBFJUMYVWCMZWY55ZJPIJMG1XWFV9ISZPZH1X3LBF0UV3UG6ORETM0EHE5R86Z47B1F \
+ inventory.json
+```
Inspecting the content of the `inventory.json` file, we can find an inventory of the vault:
@@ -186,16 +201,21 @@ You can delete a Glacier archive using the [`DeleteArchive`](https://docs.aws.am
Run the following command to delete the previously created archive:
-{{< command >}}
-$ awslocal glacier delete-archive \
- --vault-name sample-vault --account-id - --archive-id d41d8cd98f00b204e9800998ecf8427e
-{{< /command >}}
+```bash
+awslocal glacier delete-archive \
+ --vault-name sample-vault \
+ --account-id - \
+ --archive-id d41d8cd98f00b204e9800998ecf8427e
+```
### Delete a vault
You can delete a Glacier vault with the [`DeleteVault`](https://docs.aws.amazon.com/amazonglacier/latest/dev/api-vault-delete.html) API.
Run the following command to delete the vault:
-{{< command >}}
-$ awslocal glacier delete-vault --vault-name sample-vault --account-id -
-{{< /command >}}
+
+```bash
+awslocal glacier delete-vault \
+ --vault-name sample-vault \
+ --account-id -
+```
diff --git a/src/content/docs/aws/services/glue.md b/src/content/docs/aws/services/glue.md
index 9b148398..f174f552 100644
--- a/src/content/docs/aws/services/glue.md
+++ b/src/content/docs/aws/services/glue.md
@@ -1,6 +1,5 @@
---
title: Glue
-linkTitle: Glue
description: Get started with Glue on LocalStack
tags: ["Ultimate"]
---
@@ -10,9 +9,9 @@ tags: ["Ultimate"]
The Glue API in LocalStack Pro allows you to run ETL (Extract-Transform-Load) jobs locally, maintain table metadata in the local Glue data catalog, and use the Spark ecosystem (PySpark/Scala) to run data processing workflows.
LocalStack allows you to use the Glue APIs in your local environment.
-The supported APIs are available on our [API coverage page](/references/coverage/coverage_glue/), which provides information on the extent of Glue's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of Glue's integration with LocalStack.
-{{< callout >}}
+:::note
LocalStack now includes a container-based Glue Job executor, enabling Glue jobs to run within a Docker environment.
Previously, LocalStack relied on a pre-packaged binary that included Spark and other required components.
The new executor leverages the `aws-glue-libs` Docker image, provides better production parity, faster startup times, and more reliable execution.
@@ -27,7 +26,7 @@ Key enhancements include:
To use it, set `GLUE_JOB_EXECUTOR=docker` and `GLUE_JOB_EXECUTOR_PROVIDER=v2` in your LocalStack configuration.
The new executor additionally deprecates older versions of Glue (`0.9`, `1.0`, `2.0`).
-{{< /callout >}}
+:::
## Getting started
@@ -36,20 +35,20 @@ This guide is designed for users new to Glue and assumes basic knowledge of the
Start your LocalStack container using your preferred method.
We will demonstrate how to create databases and table metadata in Glue, run Glue ETL jobs, import databases from Athena, and run Glue Crawlers with the AWS CLI.
-{{< callout >}}
-In order to run Glue jobs, some additional dependencies have to be fetched from the network, including a Docker image of apprx.
-1.5GB which includes Spark, Presto, Hive and other tools.
+:::note
+In order to run Glue jobs, some additional dependencies have to be fetched from the network, including a Docker image of approximately 1.5GB which includes Spark, Presto, Hive and other tools.
These dependencies are automatically fetched when you start up the service, so please make sure you're on a decent internet connection when pulling the dependencies for the first time.
-{{< /callout >}}
+:::
### Creating Databases and Table Metadata
The commands below illustrate the creation of some very basic entries (databases, tables) in the Glue data catalog:
-{{< command >}}
-$ awslocal glue create-database --database-input '{"Name":"db1"}'
-$ awslocal glue create-table --database db1 --table-input '{"Name":"table1"}'
-$ awslocal glue get-tables --database db1
-{{< /command >}}
+
+```bash
+awslocal glue create-database --database-input '{"Name":"db1"}'
+awslocal glue create-table --database db1 --table-input '{"Name":"table1"}'
+awslocal glue get-tables --database db1
+```
You should see the following output:
@@ -87,27 +86,32 @@ if __name__ == '__main__':
```
You can now copy the script to an S3 bucket:
-{{< command >}}
-$ awslocal s3 mb s3://glue-test
-$ awslocal s3 cp job.py s3://glue-test/job.py
-{{< / command >}}
+
+```bash
+awslocal s3 mb s3://glue-test
+awslocal s3 cp job.py s3://glue-test/job.py
+```
Next, you can create a job definition:
-{{< command >}}
-$ awslocal glue create-job --name job1 --role arn:aws:iam::000000000000:role/glue-role \
- --command '{"Name": "pythonshell", "ScriptLocation": "s3://glue-test/job.py"}'
-{{< / command >}}
+```bash
+awslocal glue create-job \
+ --name job1 \
+ --role arn:aws:iam::000000000000:role/glue-role \
+ --command '{"Name": "pythonshell", "ScriptLocation": "s3://glue-test/job.py"}'
+```
You can finally start the job execution:
-{{< command >}}
-$ awslocal glue start-job-run --job-name job1
-{{< / command >}}
+```bash
+awslocal glue start-job-run --job-name job1
+```
+
The returned `JobRunId` can be used to query the status of the job execution until it becomes `SUCCEEDED`:
-{{< command >}}
-$ awslocal glue get-job-run --job-name job1 --run-id
-{{< / command >}}
+
+```bash
+awslocal glue get-job-run --job-name job1 --run-id <job-run-id>
+```
You should see the following output:
@@ -136,16 +140,17 @@ CREATE EXTERNAL TABLE db2.table2 (a1 Date, a2 STRING, a3 INT) LOCATION 's3://tes
```
Then this command will import these DB/table definitions into the Glue data catalog:
-{{< command >}}
-$ awslocal glue import-catalog-to-glue
-{{< /command >}}
+
+```bash
+awslocal glue import-catalog-to-glue
+```
Afterwards, the databases and tables will be available in Glue.
You can query the databases with the `get-databases` operation:
-{{< command >}}
-$ awslocal glue get-databases
-{{< /command >}}
+```bash
+awslocal glue get-databases
+```
You should see the following output:
@@ -166,9 +171,11 @@ You should see the following output:
```
And you can query the tables with the `get-tables` operation:
-{{< command >}}
-$ awslocal glue get-tables --database-name db2
-{{< / command >}}
+
+```bash
+awslocal glue get-tables --database-name db2
+```
+
You should see the following output:
```json
@@ -203,28 +210,33 @@ The example below illustrates crawling tables and partition metadata from S3 buc
You can first create an S3 bucket with a couple of items:
-{{< command >}}
-$ awslocal s3 mb s3://test
-$ printf "1, 2, 3, 4\n5, 6, 7, 8" > /tmp/file.csv
-$ awslocal s3 cp /tmp/file.csv s3://test/table1/year=2021/month=Jan/day=1/file.csv
-$ awslocal s3 cp /tmp/file.csv s3://test/table1/year=2021/month=Jan/day=2/file.csv
-$ awslocal s3 cp /tmp/file.csv s3://test/table1/year=2021/month=Feb/day=1/file.csv
-$ awslocal s3 cp /tmp/file.csv s3://test/table1/year=2021/month=Feb/day=2/file.csv
-{{< / command >}}
+```bash
+awslocal s3 mb s3://test
+printf "1, 2, 3, 4\n5, 6, 7, 8" > /tmp/file.csv
+awslocal s3 cp /tmp/file.csv s3://test/table1/year=2021/month=Jan/day=1/file.csv
+awslocal s3 cp /tmp/file.csv s3://test/table1/year=2021/month=Jan/day=2/file.csv
+awslocal s3 cp /tmp/file.csv s3://test/table1/year=2021/month=Feb/day=1/file.csv
+awslocal s3 cp /tmp/file.csv s3://test/table1/year=2021/month=Feb/day=2/file.csv
+```
You can then create and trigger the crawler:
-{{< command >}}
-$ awslocal glue create-database --database-input '{"Name":"db1"}'
-$ awslocal glue create-crawler --name c1 --database-name db1 --role arn:aws:iam::000000000000:role/glue-role --targets '{"S3Targets": [{"Path": "s3://test/table1"}]}'
-$ awslocal glue start-crawler --name c1
-{{< / command >}}
+```bash
+awslocal glue create-database --database-input '{"Name":"db1"}'
+awslocal glue create-crawler \
+ --name c1 \
+ --database-name db1 \
+ --role arn:aws:iam::000000000000:role/glue-role \
+ --targets '{"S3Targets": [{"Path": "s3://test/table1"}]}'
+awslocal glue start-crawler --name c1
+```
Finally, you can query the table metadata that has been created by the crawler:
-{{< command >}}
-$ awslocal glue get-tables --database-name db1
-{{< / command >}}
+```bash
+awslocal glue get-tables --database-name db1
+```
+
You should see the following output:
```json
@@ -237,9 +249,11 @@ You should see the following output:
```
You can also query the created table partitions:
-{{< command >}}
-$ awslocal glue get-partitions --database-name db1 --table-name table1
-{{< / command >}}
+
+```bash
+awslocal glue get-partitions --database-name db1 --table-name table1
+```
+
You should see the following output:
```json
@@ -257,9 +271,16 @@ When using JDBC crawlers, you can point your crawler towards a Redshift database
Below is a rough outline of the steps required to get the integration for the JDBC crawler working.
You can first create the local Redshift cluster via:
-{{< command >}}
-$ awslocal redshift create-cluster --cluster-identifier c1 --node-type dc1.large --master-username test --master-user-password test --db-name db1
-{{< / command >}}
+
+```bash
+awslocal redshift create-cluster \
+ --cluster-identifier c1 \
+ --node-type dc1.large \
+ --master-username test \
+ --master-user-password test \
+ --db-name db1
+```
+
The output of this command contains the endpoint address of the created Redshift database:
```json
@@ -275,18 +296,23 @@ Then you can use any JDBC or Postgres client to create a table `mytable1` in the
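+
+For example, with `psql` (a sketch: the connection string mirrors the JDBC URL used below, and the table schema is purely illustrative):
+
+```bash
+psql "postgresql://test:test@localhost.localstack.cloud:4510/db1" \
+  -c "CREATE TABLE mytable1 (id INT, name VARCHAR(64));"
+```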
Next, create the Glue database, the JDBC connection, and the crawler:
-{{< command >}}
-$ awslocal glue create-database --database-input '{"Name":"gluedb1"}'
-$ awslocal glue create-connection --connection-input \
+```bash
+awslocal glue create-database --database-input '{"Name":"gluedb1"}'
+awslocal glue create-connection --connection-input \
{"Name":"conn1","ConnectionType":"JDBC","ConnectionProperties":{"USERNAME":"test","PASSWORD":"test","JDBC_CONNECTION_URL":"jdbc:redshift://localhost.localstack.cloud:4510/db1"}}'
-$ awslocal glue create-crawler --name c1 --database-name gluedb1 --role arn:aws:iam::000000000000:role/glue-role --targets '{"JdbcTargets":[{"ConnectionName":"conn1","Path":"db1/%/mytable1"}]}'
-$ awslocal glue start-crawler --name c1
-{{< / command >}}
+awslocal glue create-crawler \
+ --name c1 \
+ --database-name gluedb1 \
+ --role arn:aws:iam::000000000000:role/glue-role \
+ --targets '{"JdbcTargets":[{"ConnectionName":"conn1","Path":"db1/%/mytable1"}]}'
+awslocal glue start-crawler --name c1
+```
Once the crawler has started, wait until its `State` turns to `READY`, which you can check by querying the crawler:
-{{< command >}}
-$ awslocal glue get-crawler --name c1
-{{< /command >}}
+
+```bash
+awslocal glue get-crawler --name c1
+```
Once the crawler has finished running and is back in `READY` state, the Glue table within the `gluedb1` DB should have been populated and can be queried via the API.
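+
+For instance, you could list the crawled tables with the same `get-tables` operation used earlier:
+
+```bash
+awslocal glue get-tables --database-name gluedb1
+```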
@@ -296,21 +322,27 @@ The Glue Schema Registry allows you to centrally discover, control, and evolve d
With the Schema Registry, you can manage and enforce schemas and schema compatibilities in your streaming applications.
It integrates nicely with [Managed Streaming for Kafka (MSK)](../managed-streaming-for-kafka).
-{{< callout >}}
+:::note
Currently, LocalStack supports the AVRO data format for the Glue Schema Registry.
Support for other data formats will be added in the future.
-{{< /callout >}}
+:::
You can create a schema registry with the following command:
-{{< command >}}
-$ awslocal glue create-registry --registry-name demo-registry
-{{< /command >}}
+
+```bash
+awslocal glue create-registry --registry-name demo-registry
+```
You can create a schema in the newly created registry with the `create-schema` command:
-{{< command >}}
-$ awslocal glue create-schema --schema-name demo-schema --registry-id RegistryName=demo-registry --data-format AVRO --compatibility FORWARD \
- --schema-definition '{"type":"record","namespace":"Demo","name":"Person","fields":[{"name":"Name","type":"string"}]}'
-{{< /command >}}
+
+```bash
+awslocal glue create-schema --schema-name demo-schema \
+ --registry-id RegistryName=demo-registry \
+ --data-format AVRO \
+ --compatibility FORWARD \
+ --schema-definition '{"type":"record","namespace":"Demo","name":"Person","fields":[{"name":"Name","type":"string"}]}'
+```
+
You should see the following output:
```json
@@ -331,10 +363,12 @@ You should see the following output:
```
Once the schema has been created, you can create a new version:
-{{< command >}}
-$ awslocal glue register-schema-version --schema-id SchemaName=demo-schema,RegistryName=demo-registry \
- --schema-definition '{"type":"record","namespace":"Demo","name":"Person","fields":[{"name":"Name","type":"string"}, {"name":"Address","type":"string"}]}'
-{{< /command >}}
+
+```bash
+awslocal glue register-schema-version \
+ --schema-id SchemaName=demo-schema,RegistryName=demo-registry \
+ --schema-definition '{"type":"record","namespace":"Demo","name":"Person","fields":[{"name":"Name","type":"string"}, {"name":"Address","type":"string"}]}'
+```
You should see the following output:
@@ -352,9 +386,9 @@ You can find a more advanced sample in our [localstack-pro-samples repository on
LocalStack Glue supports [Delta Lake](https://delta.io), an open-source storage framework that extends Parquet data files with a file-based transaction log for ACID transactions and scalable metadata handling.
-{{< callout >}}
+:::note
Please note that Delta Lake tables are only [supported for Glue versions `3.0` and `4.0`](https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-format-delta-lake.html).
-{{< /callout >}}
+:::
To illustrate this feature, we take a closer look at a Glue sample job that creates a Delta Lake table, puts some data into it, and then queries data from the table.
@@ -390,18 +424,16 @@ print("SQL result:", result.toJSON().collect())
You can now run the following commands to create and start the Glue job:
-{{< command >}}
-$ awslocal s3 mb s3://test
-$ awslocal s3 cp job.py s3://test/job.py
-$ awslocal glue create-job --name job1 --role arn:aws:iam::000000000000:role/test \
- --glue-version 4.0 --command '{"Name": "pythonshell", "ScriptLocation": "s3://test/job.py"}'
-$ awslocal glue start-job-run --job-name job1
-
-{
- "JobRunId": "c9471f40"
-}
-
-{{< / command >}}
+```bash
+awslocal s3 mb s3://test
+awslocal s3 cp job.py s3://test/job.py
+awslocal glue create-job --name job1 --role arn:aws:iam::000000000000:role/test \
+ --glue-version 4.0 \
+ --command '{"Name": "pythonshell", "ScriptLocation": "s3://test/job.py"}'
+awslocal glue start-job-run --job-name job1
+```
+
+Retrieve the job run ID from the output of the `start-job-run` command.
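+
+If you prefer to capture the run ID programmatically, one option is to pipe the output through `jq` (a sketch; assumes `jq` is installed, and note that this starts a fresh job run):
+
+```bash
+RUN_ID=$(awslocal glue start-job-run --job-name job1 | jq -r '.JobRunId')
+echo "$RUN_ID"
+```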
The execution of the Glue job can take a few moments - once the job has finished executing, you should see a log line with the query results in the LocalStack container logs, similar to the output below:
@@ -411,20 +443,20 @@ SQL result: ['{"name":"test1","key":123}', '{"name":"test2","key":456}']
```
In order to see the logs above, make sure to enable `DEBUG=1` in the LocalStack container environment.
-Alternatively, you can also retrieve the job logs programmatically via the CloudWatch Logs API - for example, using the job run ID `c9471f40` from above:
-{{< command >}}
-$ awslocal logs get-log-events --log-group-name /aws-glue/jobs/logs-v2 --log-stream-name c9471f40
-
-{ "events": [ ... ] }
-
-{{< / command >}}
+Alternatively, you can retrieve the job logs programmatically via the CloudWatch Logs API, using the job run ID from the above command as the log stream name:
+
+```bash
+awslocal logs get-log-events \
+ --log-group-name /aws-glue/jobs/logs-v2 \
+  --log-stream-name <job-run-id>
+```
## Resource Browser
The LocalStack Web Application provides a Resource Browser for Glue.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Glue** under the **Analytics** section.
-
+
The Resource Browser allows you to perform the following actions:
@@ -438,12 +470,6 @@ The Resource Browser allows you to perform the following actions:
## Examples
-The following Developer Hub applications are using Glue:
-{{< applications service_filter="glu">}}
-
-The following tutorials are using Glue:
-{{< tutorials "/tutorials/schema-evolution-glue-msk">}}
-
The following code snippets and sample applications provide practical examples of how to use Glue in LocalStack for various use cases:
- [localstack-pro-samples/glue-etl-jobs](https://github.com/localstack/localstack-pro-samples/tree/master/glue-etl-jobs)
diff --git a/src/content/docs/aws/services/iam.md b/src/content/docs/aws/services/iam.md
index a209b42f..88e29df5 100644
--- a/src/content/docs/aws/services/iam.md
+++ b/src/content/docs/aws/services/iam.md
@@ -1,6 +1,5 @@
---
title: "Identity and Access Management (IAM)"
-linkTitle: "Identity and Access Management (IAM)"
description: Get started with AWS Identity and Access Management (IAM) on LocalStack
persistence: supported
tags: ["Free"]
@@ -13,8 +12,8 @@ IAM allows organizations to create and manage AWS users, groups, and roles, defi
By centralizing access control, administrators can enforce the principle of least privilege, ensuring users have only the necessary permissions for their tasks.
LocalStack allows you to use the IAM APIs in your local environment to create and manage users, groups, and roles, granting permissions that adhere to the principle of least privilege.
-The supported APIs are available on our [API coverage page]({{< ref "references/coverage/coverage_iam" >}}), which provides information on the extent of IAM's integration with LocalStack.
-The policy coverage is documented in the [IAM coverage documentation]({{< ref "iam-coverage" >}}).
+The supported APIs are available on our [API coverage page](), which provides information on the extent of IAM's integration with LocalStack.
+The policy coverage is documented in the [IAM coverage documentation]().
## Getting started
@@ -26,9 +25,9 @@ We will demonstrate how you can create a new user named `test`, create an access
By default, in the absence of custom credentials configuration, all requests to LocalStack run under the administrative root user.
Run the following command to use the [`GetCallerIdentity`](https://docs.aws.amazon.com/cli/latest/reference/sts/get-caller-identity.html) API to confirm that the request is running under the root user:
-{{< command >}}
-$ awslocal sts get-caller-identity
-{{< / command >}}
+```bash
+awslocal sts get-caller-identity
+```
You can see an output similar to the following:
@@ -43,16 +42,16 @@ You can see an output similar to the following:
You can now create a new user named `test` using the [`CreateUser`](https://docs.aws.amazon.com/cli/latest/reference/iam/create-user.html) API.
Run the following command:
-{{< command >}}
-$ awslocal iam create-user --user-name test
-{{< / command >}}
+```bash
+awslocal iam create-user --user-name test
+```
You can now create an access key pair for the user using the [`CreateAccessKey`](https://docs.aws.amazon.com/cli/latest/reference/iam/create-access-key.html) API.
Run the following command:
-{{< command >}}
-$ awslocal iam create-access-key --user-name test
-{{< / command >}}
+```bash
+awslocal iam create-access-key --user-name test
+```
You can see an output similar to the following:
@@ -72,15 +71,20 @@ You can see an output similar to the following:
You can save the `AccessKeyId` and `SecretAccessKey` values, and export them in the environment to run commands under the `test` user.
Run the following command:
-{{< command >}}
-$ export AWS_ACCESS_KEY_ID=LKIAQAAAAAAAGFWKCM5F AWS_SECRET_ACCESS_KEY=DUulXk2N2yD6rgoBBR9A/5iXa6dBcLyDknr925Q5
-$ awslocal sts get-caller-identity
+```bash
+export AWS_ACCESS_KEY_ID=LKIAQAAAAAAAGFWKCM5F AWS_SECRET_ACCESS_KEY=DUulXk2N2yD6rgoBBR9A/5iXa6dBcLyDknr925Q5
+awslocal sts get-caller-identity
+```
+
+You can see an output similar to the following:
+
+```bash
{
"UserId": "b2yxf5g824zklfx5ry8o",
"Account": "000000000000",
"Arn": "arn:aws:iam::000000000000:user/test"
}
-{{< / command >}}
+```
You can see that the request is now running under the `test` user.
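+
+To switch back to the administrative root user, simply unset the credentials again:
+
+```bash
+unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY
+```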
@@ -89,7 +93,7 @@ You can see that the request is now running under the `test` user.
The LocalStack Web Application provides a Resource Browser for managing IAM users, groups, and roles.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **IAM** under the **Security Identity Compliance** section.
-
+
The Resource Browser allows you to perform the following actions:
@@ -103,11 +107,11 @@ The Resource Browser allows you to perform the following actions:
LocalStack provides various tools to help you generate, test, and enforce IAM policies more efficiently.
- **IAM Policy Stream**: IAM Policy Stream provides a real-time view of API calls and the corresponding IAM policies they generate, simplifying permission management and ensuring correct permissions are assigned.
- Learn more in the [IAM Policy Stream documentation]({{< ref "user-guide/security-testing/iam-policy-stream" >}}).
+ Learn more in the [IAM Policy Stream documentation](/aws/capabilities/security-testing/iam-policy-stream).
- **IAM Policy Enforcement**: This configuration enforces IAM policies when interacting with local cloud APIs, simulating a real AWS environment.
- For additional information, refer to the [IAM Policy Enforcement documentation]({{< ref "iam-enforcement" >}}).
+ For additional information, refer to the [IAM Policy Enforcement documentation](/aws/capabilities/security-testing/iam-policy-enforcement).
- **Explainable IAM**: Explainable IAM logs outputs related to failed policy evaluations directly to LocalStack logs, aiding in the identification of necessary policies for successful requests.
- More details are available in the [Explainable IAM documentation]({{< ref "explainable-iam" >}}).
+ More details are available in the [Explainable IAM documentation](/aws/capabilities/security-testing/explainable-iam).
## Examples
diff --git a/src/content/docs/aws/services/identitystore.md b/src/content/docs/aws/services/identitystore.md
index a5503799..781077d8 100644
--- a/src/content/docs/aws/services/identitystore.md
+++ b/src/content/docs/aws/services/identitystore.md
@@ -1,6 +1,5 @@
---
title: "Identity Store"
-linkTitle: "Identity Store"
description: Get started with Identity Store on LocalStack
tags: ["Ultimate"]
---
@@ -11,7 +10,7 @@ Identity Store is a managed service that enables the creation and management of
Groups are used to manage access to AWS resources, and Identity Store provides a central location to create and manage groups across your AWS accounts.
LocalStack allows you to use the Identity Store APIs to create and manage groups in your local environment.
-The supported APIs are available on our [API Coverage Page]({{< ref "coverage_identitystore" >}}), which provides information on the extent of Identity Store integration with LocalStack.
+The supported APIs are available on our [API Coverage Page](), which provides information on the extent of Identity Store integration with LocalStack.
## Getting started
@@ -26,15 +25,18 @@ This guide will demonstrate how to create a group within Identity Store, list al
You can create a new group in the Identity Store using the [`CreateGroup`](https://docs.aws.amazon.com/singlesignon/latest/IdentityStoreAPIReference/API_CreateGroup.html) API.
Execute the following command to create a group with an identity store ID of `testls`:
-{{< command >}}
-$ awslocal identitystore create-group --identity-store-id testls
-
+```bash
+awslocal identitystore create-group --identity-store-id testls
+```
+
+You can see an output similar to the following:
+
+```bash
{
"GroupId": "38cec731-de22-45bf-9af7-b74457bba884",
"IdentityStoreId": "testls"
}
-
-{{< / command >}}
+```
Copy the `GroupId` value from the output, as it will be needed in subsequent steps.
@@ -43,9 +45,13 @@ Copy the `GroupId` value from the output, as it will be needed in subsequent ste
After creating groups, you might want to list all groups within the Identity Store to manage or review them.
Run the following command to list all groups using the [`ListGroups`](https://docs.aws.amazon.com/singlesignon/latest/IdentityStoreAPIReference/API_ListGroups.html) API:
-{{< command >}}
-$ awslocal identitystore list-groups --identity-store-id testls
-
+```bash
+awslocal identitystore list-groups --identity-store-id testls
+```
+
+You can see an output similar to the following:
+
+```bash
{
"Groups": [
{
@@ -55,8 +61,7 @@ $ awslocal identitystore list-groups --identity-store-id testls
}
]
}
-
-{{< / command >}}
+```
This command returns a list of all groups, including the group you created in the previous step.
@@ -65,15 +70,18 @@ This command returns a list of all groups, including the group you created in th
To view details about a specific group, use the [`DescribeGroup`](https://docs.aws.amazon.com/singlesignon/latest/IdentityStoreAPIReference/API_DescribeGroup.html) API.
Run the following command to describe the group you created in the previous step:
-{{< command >}}
-$ awslocal describe-group --identity-store-id testls --group-id 38cec731-de22-45bf-9af7-b74457bba884
-
+```bash
+awslocal identitystore describe-group --identity-store-id testls --group-id 38cec731-de22-45bf-9af7-b74457bba884
+```
+
+You can see an output similar to the following:
+
+```bash
{
"GroupId": "38cec731-de22-45bf-9af7-b74457bba884",
"ExternalIds": [],
"IdentityStoreId": "testls"
}
-
-{{< / command >}}
+```
This command provides detailed information about the specific group, including its ID and any external IDs associated with it.
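+
+To clean up, you can remove the group again using the [`DeleteGroup`](https://docs.aws.amazon.com/singlesignon/latest/IdentityStoreAPIReference/API_DeleteGroup.html) API, reusing the `GroupId` from above:
+
+```bash
+awslocal identitystore delete-group \
+  --identity-store-id testls \
+  --group-id 38cec731-de22-45bf-9af7-b74457bba884
+```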
diff --git a/src/content/docs/aws/services/iot.md b/src/content/docs/aws/services/iot.md
index 49c1c789..a8f1169f 100644
--- a/src/content/docs/aws/services/iot.md
+++ b/src/content/docs/aws/services/iot.md
@@ -1,9 +1,7 @@
---
title: "IoT"
-linkTitle: "IoT"
tags: ["Base"]
-description: >
- Get started with AWS IoT on LocalStack
+description: Get started with AWS IoT on LocalStack
---
## Introduction
@@ -12,7 +10,7 @@ AWS IoT provides cloud services to manage IoT devices and integrate them with ot
LocalStack supports IoT Core, IoT Data, and IoT Analytics.
Common operations for creating and updating things, groups, policies, certificates and other entities are implemented with full CloudFormation support.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_iot" >}}).
+The supported APIs are available on our [API coverage page]().
LocalStack ships a [Message Queuing Telemetry Transport (MQTT)](https://mqtt.org/) broker powered by [Eclipse Mosquitto](https://mosquitto.org/) which supports both pure MQTT and MQTT-over-WSS (WebSockets Secure) protocols.
@@ -24,42 +22,45 @@ Start LocalStack using your preferred method.
To retrieve the MQTT endpoint, use the [`DescribeEndpoint`](https://docs.aws.amazon.com/iot/latest/apireference/API_DescribeEndpoint.html) operation.
-{{< command >}}
-$ awslocal iot describe-endpoint
-
+```bash
+awslocal iot describe-endpoint
+```
+
+You can see an output similar to the following:
+
+```bash
{
"endpointAddress": "000000000000.iot.eu-central-1.localhost.localstack.cloud:4510"
}
-
-{{< / command >}}
+```
-{{< callout "tip" >}}
+:::note
LocalStack lazy-loads services by default.
The MQTT broker may not be automatically available on a fresh launch of LocalStack.
You can make a `DescribeEndpoint` call to start the broker and identify the port.
-{{< /callout >}}
+:::
This endpoint can then be used with any MQTT client to publish and subscribe to topics.
In this example, we will use the [HiveMQ MQTT CLI](https://hivemq.github.io/mqtt-cli/docs/installation/).
Run the following command to subscribe to an MQTT topic.
-{{< command >}}
-$ mqtt subscribe \
+```bash
+mqtt subscribe \
--host 000000000000.iot.eu-central-1.localhost.localstack.cloud \
--port 4510 \
--topic climate
-{{< /command >}}
+```
In a separate terminal session, publish a message to this topic.
-{{< command >}}
-$ mqtt publish \
+```bash
+mqtt publish \
--host 000000000000.iot.eu-central-1.localhost.localstack.cloud \
--port 4510 \
--topic climate \
-m "temperature=30°C;humidity=60%"
-{{< /command >}}
+```
This message will be pushed to all subscribers of this topic, including the one in the first terminal session.
@@ -68,10 +69,10 @@ This message will be pushed to all subscribers of this topic, including the one
LocalStack IoT maintains its own root certificate authority which is regenerated at every run.
The root CA certificate can be retrieved from .
-{{< callout "tip" >}}
+:::note
AWS provides its root CA certificate at .
[This section](https://docs.aws.amazon.com/iot/latest/developerguide/server-authentication.html#server-authentication-certs) contains information about CA certificates.
-{{< /callout >}}
+:::
When connecting to the endpoints, you will need to provide this root CA certificate for authentication.
This is illustrated below with Python [AWS IoT SDK](https://docs.aws.amazon.com/iot/latest/developerguide/iot-sdks.html),
@@ -168,10 +169,10 @@ Currently the `principalIdentifier` and `sessionIdentifier` fields in event payl
LocalStack can publish the [registry events](https://docs.aws.amazon.com/iot/latest/developerguide/registry-events.html), if [you enable it](https://docs.aws.amazon.com/iot/latest/developerguide/iot-events.html#iot-events-enable).
-{{< command >}}
-$ awslocal iot update-event-configurations \
- --event-configurations '{"THING":{"Enabled": true}}'
-{{< / command >}}
+```bash
+awslocal iot update-event-configurations \
+ --event-configurations '{"THING":{"Enabled": true}}'
+```
You can then subscribe or use topic rules on the following topics:
diff --git a/src/content/docs/aws/services/iotanalytics.md b/src/content/docs/aws/services/iotanalytics.md
index 51b77372..766428f5 100644
--- a/src/content/docs/aws/services/iotanalytics.md
+++ b/src/content/docs/aws/services/iotanalytics.md
@@ -1,14 +1,13 @@
---
title: "IoT Analytics"
-linkTitle: "IoT Analytics"
tags: ["Ultimate"]
description: Get started with IoT Analytics on LocalStack
---
-{{< callout "warning" >}}
+:::danger
IoT Analytics will be [retired on 15 December 2025](https://docs.aws.amazon.com/iotanalytics/latest/userguide/iotanalytics-end-of-support.html).
It will be removed from LocalStack soon after this date.
-{{< /callout >}}
+:::
## Introduction
@@ -16,7 +15,7 @@ IoT Analytics is a managed service that enables you to collect, store, process,
It provides a set of tools to build IoT applications without having to manage the underlying infrastructure.
LocalStack allows you to use the IoT Analytics APIs to create and manage channels, data stores, and pipelines in your local environment.
-The supported APIs are available on our [API Coverage Page]({{< ref "coverage_iotanalytics" >}}), which provides information on the extent of IoT Analytics integration with LocalStack.
+The supported APIs are available on our [API Coverage Page](), which provides information on the extent of IoT Analytics integration with LocalStack.
## Getting started
@@ -30,15 +29,15 @@ We will demonstrate how to create a channel, data store, and pipeline within IoT
You can create a channel using the [`CreateChannel`](https://docs.aws.amazon.com/iotanalytics/latest/APIReference/API_CreateChannel.html) API.
Run the following command to create a channel named `mychannel`:
-{{< command >}}
-$ awslocal iotanalytics create-channel --channel-name mychannel
-{{< /command >}}
+```bash
+awslocal iotanalytics create-channel --channel-name mychannel
+```
You can use the [`DescribeChannel`](https://docs.aws.amazon.com/iotanalytics/latest/APIReference/API_DescribeChannel.html) API to check the status of the channel:
-{{< command >}}
-$ awslocal iotanalytics describe-channel --channel-name mychannel
-{{< /command >}}
+```bash
+awslocal iotanalytics describe-channel --channel-name mychannel
+```
The following output is displayed:
@@ -56,15 +55,15 @@ The following output is displayed:
You can create a data store using the [`CreateDatastore`](https://docs.aws.amazon.com/iotanalytics/latest/APIReference/API_CreateDatastore.html) API.
Run the following command to create a data store named `mydatastore`:
-{{< command >}}
-$ awslocal iotanalytics create-datastore --datastore-name mydatastore
-{{< /command >}}
+```bash
+awslocal iotanalytics create-datastore --datastore-name mydatastore
+```
You can use the [`DescribeDatastore`](https://docs.aws.amazon.com/iotanalytics/latest/APIReference/API_DescribeDatastore.html) API to check the status of the data store:
-{{< command >}}
-$ awslocal iotanalytics describe-datastore --datastore-name mydatastore
-{{< /command >}}
+```bash
+awslocal iotanalytics describe-datastore --datastore-name mydatastore
+```
The following output is displayed:
@@ -82,9 +81,9 @@ The following output is displayed:
You can create a pipeline using the [`CreatePipeline`](https://docs.aws.amazon.com/iotanalytics/latest/APIReference/API_CreatePipeline.html) API.
Run the following command to create a pipeline named `mypipeline`:
-{{< command >}}
-$ awslocal iotanalytics create-pipeline --cli-input-json file://mypipeline.json
-{{< /command >}}
+```bash
+awslocal iotanalytics create-pipeline --cli-input-json file://mypipeline.json
+```
The `mypipeline.json` file contains the following content:
@@ -111,9 +110,9 @@ The `mypipeline.json` file contains the following content:
You can use the [`DescribePipeline`](https://docs.aws.amazon.com/iotanalytics/latest/APIReference/API_DescribePipeline.html) API to check the status of the pipeline:
-{{< command >}}
-$ awslocal iotanalytics describe-pipeline --pipeline-name mypipeline
-{{< /command >}}
+```bash
+awslocal iotanalytics describe-pipeline --pipeline-name mypipeline
+```
The following output is displayed:
diff --git a/src/content/docs/aws/services/iotdata.md b/src/content/docs/aws/services/iotdata.md
index 175d8d04..4d547337 100644
--- a/src/content/docs/aws/services/iotdata.md
+++ b/src/content/docs/aws/services/iotdata.md
@@ -1,6 +1,5 @@
---
title: "IoT Data"
-linkTitle: "IoT Data"
tags: ["Ultimate"]
description: Get started with IoT Data on LocalStack
---
@@ -11,7 +10,7 @@ IoT Data provides secure, bi-directional communication between Internet-connecte
It allows you to connect your devices to the cloud and interact with them using the AWS Management Console, AWS CLI, or AWS SDKs.
LocalStack allows you to use the IoT Data APIs to update, get, and delete the shadow of a thing in your local environment.
-The supported APIs are available on our [API Coverage Page]({{< ref "coverage_iot-data" >}}), which provides information on the extent of IoT Data integration with LocalStack.
+The supported APIs are available on our [API Coverage Page](), which provides information on the extent of IoT Data integration with LocalStack.
## Getting started
@@ -25,12 +24,12 @@ We will demonstrate how to create a thing, update its shadow, get its shadow, an
You can update the shadow of a thing using the [`UpdateThingShadow`](https://docs.aws.amazon.com/iot/latest/apireference/API_UpdateThingShadow.html) API.
Run the following command to update the shadow of a thing named `MyRPi`:
-{{< command >}}
-$ awslocal iot-data update-thing-shadow \
+```bash
+awslocal iot-data update-thing-shadow \
--thing-name "MyRPi" \
--payload "{\"state\":{\"reported\":{\"moisture\":\"okay\"}}}" \
output.txt --cli-binary-format raw-in-base64-out
-{{< /command >}}
+```
The `output.txt` file contains the following output:
@@ -58,11 +57,11 @@ The `output.txt` file contains the following output:
You can get the shadow of a thing using the [`GetThingShadow`](https://docs.aws.amazon.com/iot/latest/apireference/API_GetThingShadow.html) API.
Run the following command to get the shadow:
-{{< command >}}
-$ awslocal iot-data get-thing-shadow \
+```bash
+awslocal iot-data get-thing-shadow \
--thing-name "MyRPi" \
output.txt
-{{< /command >}}
+```
The `output.txt` will contain the same output as the previous command.
@@ -71,11 +70,11 @@ The `output.txt` will contain the same output as the previous command.
You can delete the shadow of a thing using the [`DeleteThingShadow`](https://docs.aws.amazon.com/iot/latest/apireference/API_DeleteThingShadow.html) API.
Run the following command to delete the shadow:
-{{< command >}}
-$ awslocal iot-data delete-thing-shadow \
+```bash
+awslocal iot-data delete-thing-shadow \
--thing-name "MyRPi" \
output.txt
-{{< /command >}}
+```
The `output.txt` will contain the following output:
diff --git a/src/content/docs/aws/services/iotwireless.md b/src/content/docs/aws/services/iotwireless.md
index ccae3523..36074e56 100644
--- a/src/content/docs/aws/services/iotwireless.md
+++ b/src/content/docs/aws/services/iotwireless.md
@@ -1,6 +1,5 @@
---
title: "IoT Wireless"
-linkTitle: "IoT Wireless"
description: Get started with IoT Wireless on LocalStack
tags: ["Ultimate"]
---
@@ -11,7 +10,7 @@ AWS IoT Wireless is a managed service that enables customers to connect and mana
The service provides a set of APIs to manage wireless devices, gateways, and destinations.
LocalStack allows you to use the IoT Wireless APIs in your local environment to create wireless devices and gateways.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_iotwireless" >}}), which provides information on the extent of IoT Wireless's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of IoT Wireless's integration with LocalStack.
## Getting started
@@ -25,9 +24,9 @@ We will demonstrate how to use IoT Wireless to create wireless devices and gatew
You can create a device profile using the [`CreateDeviceProfile`](https://docs.aws.amazon.com/iot-wireless/2020-11-22/API_CreateDeviceProfile.html) API.
Run the following command to create a device profile:
-{{< command >}}
-$ awslocal iotwireless create-device-profile
-{{< / command >}}
+```bash
+awslocal iotwireless create-device-profile
+```
The following output would be retrieved:
@@ -40,9 +39,9 @@ The following output would be retrieved:
You can list the device profiles using the [`ListDeviceProfiles`](https://docs.aws.amazon.com/iot-wireless/2020-11-22/API_ListDeviceProfiles.html) API.
Run the following command to list the device profiles:
-{{< command >}}
-$ awslocal iotwireless list-device-profiles
-{{< / command >}}
+```bash
+awslocal iotwireless list-device-profiles
+```
The following output would be retrieved:
@@ -61,10 +60,10 @@ The following output would be retrieved:
You can create a wireless device using the [`CreateWirelessDevice`](https://docs.aws.amazon.com/iot-wireless/2020-11-22/API_CreateWirelessDevice.html) API.
Run the following command to create a wireless device:
-{{< command >}}
-$ awslocal iotwireless create-wireless-device \
+```bash
+awslocal iotwireless create-wireless-device \
--cli-input-json file://input.json
-{{< / command >}}
+```
The `input.json` file contains the following content:
@@ -90,9 +89,9 @@ The `input.json` file contains the following content:
You can list the wireless devices using the [`ListWirelessDevices`](https://docs.aws.amazon.com/iot-wireless/2020-11-22/API_ListWirelessDevices.html) API.
Run the following command to list the wireless devices:
-{{< command >}}
-$ awslocal iotwireless list-wireless-devices
-{{< / command >}}
+```bash
+awslocal iotwireless list-wireless-devices
+```
The following output would be retrieved:
@@ -117,12 +116,12 @@ The following output would be retrieved:
You can create a wireless gateway using the [`CreateWirelessGateway`](https://docs.aws.amazon.com/iot-wireless/2020-11-22/API_CreateWirelessGateway.html) API.
Run the following command to create a wireless gateway:
-{{< command >}}
-$ awslocal iotwireless create-wireless-gateway \
+```bash
+awslocal iotwireless create-wireless-gateway \
--lorawan GatewayEui="a1b2c3d4567890ab",RfRegion="US915" \
--name "myFirstLoRaWANGateway" \
--description "Using my first LoRaWAN gateway"
-{{< / command >}}
+```
The following output would be retrieved:
@@ -135,9 +134,9 @@ The following output would be retrieved:
You can list the wireless gateways using the [`ListWirelessGateways`](https://docs.aws.amazon.com/iot-wireless/2020-11-22/API_ListWirelessGateways.html) API.
Run the following command to list the wireless gateways:
-{{< command >}}
-$ awslocal iotwireless list-wireless-gateways
-{{< / command >}}
+```bash
+awslocal iotwireless list-wireless-gateways
+```
The following output would be retrieved:
diff --git a/src/content/docs/aws/services/kinesis.md b/src/content/docs/aws/services/kinesis.md
index d3725daf..ddb55012 100644
--- a/src/content/docs/aws/services/kinesis.md
+++ b/src/content/docs/aws/services/kinesis.md
@@ -1,6 +1,5 @@
---
title: "Kinesis Data Streams"
-linkTitle: "Kinesis Data Streams"
description: Get started with Kinesis Data Streams on LocalStack
persistence: supported
tags: ["Free"]
@@ -12,7 +11,7 @@ Kinesis Data Streams is an AWS service for ingesting, buffering, and processing
It is used for applications that require real-time processing and deriving insights from data streams such as logs, metrics, user interactions, and sensor readings.
LocalStack allows you to use the Kinesis Data Streams APIs in your local environment from setting up data streams and configuring data processing to building real-time applications.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_kinesis" >}}).
+The supported APIs are available on our [API coverage page]().
Emulation for Kinesis is powered by [Kinesis Mock](https://github.com/etspaceman/kinesis-mock).
@@ -42,15 +41,15 @@ export const handler = (event, context) => {
You can create a Lambda function using the [`CreateFunction`](https://docs.aws.amazon.com/lambda/latest/dg/API_CreateFunction.html) API.
Run the following command to create a Lambda function named `ProcessKinesisRecords`:
-{{< command >}}
-$ zip function.zip index.mjs
-$ awslocal lambda create-function \
+```bash
+zip function.zip index.mjs
+awslocal lambda create-function \
--function-name ProcessKinesisRecords \
--zip-file fileb://function.zip \
--handler index.handler \
--runtime nodejs18.x \
--role arn:aws:iam::000000000000:role/lambda-kinesis-role
-{{< / command >}}
+```
The following output would be retrieved:
@@ -96,30 +95,30 @@ The JSON contains a sample Kinesis event.
You can use the [`Invoke`](https://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html) API to invoke the Lambda function with the Kinesis event as input.
Execute the following command:
-{{< command >}}
-$ awslocal lambda invoke \
+```bash
+awslocal lambda invoke \
--function-name ProcessKinesisRecords \
--payload file://input.txt outputfile.txt
-{{< / command >}}
+```
### Create a Kinesis Stream
You can create a Kinesis Stream using the [`CreateStream`](https://docs.aws.amazon.com/kinesis/latest/APIReference/API_CreateStream.html) API.
Run the following command to create a Kinesis Stream named `lambda-stream`:
-{{< command >}}
-$ awslocal kinesis create-stream \
+```bash
+awslocal kinesis create-stream \
--stream-name lambda-stream \
--shard-count 1
-{{< / command >}}
+```
You can retrieve the Stream ARN using the [`DescribeStream`](https://docs.aws.amazon.com/kinesis/latest/APIReference/API_DescribeStream.html) API.
Execute the following command:
-{{< command >}}
-$ awslocal kinesis describe-stream \
+```bash
+awslocal kinesis describe-stream \
--stream-name lambda-stream
-{{< / command >}}
+```
The following output would be retrieved:
@@ -149,25 +148,25 @@ You can save the `StreamARN` value for later use.
You can add an Event Source to your Lambda function using the [`CreateEventSourceMapping`](https://docs.aws.amazon.com/lambda/latest/dg/API_CreateEventSourceMapping.html) API.
Run the following command to add the Kinesis Stream as an Event Source to your Lambda function:
-{{< command >}}
-$ awslocal lambda create-event-source-mapping \
+```bash
+awslocal lambda create-event-source-mapping \
--function-name ProcessKinesisRecords \
 --event-source-arn arn:aws:kinesis:us-east-1:000000000000:stream/lambda-stream \
--batch-size 100 \
--starting-position LATEST
-{{< / command >}}
+```
### Test the Event Source mapping
You can test the event source mapping by adding a record to the Kinesis Stream using the [`PutRecord`](https://docs.aws.amazon.com/kinesis/latest/APIReference/API_PutRecord.html) API.
Run the following command to add a record to the Kinesis Stream:
-{{< command >}}
-$ awslocal kinesis put-record \
+```bash
+awslocal kinesis put-record \
--stream-name lambda-stream \
--partition-key 1 \
--data "Hello, this is a test."
-{{< / command >}}
+```
You can fetch the CloudWatch logs for your Lambda function reading records from the stream, using AWS CLI or LocalStack Resource Browser.
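+
+For example, a minimal sketch with the AWS CLI, assuming the conventional `/aws/lambda/<function-name>` log group name:
+
+```bash
+awslocal logs filter-log-events \
+  --log-group-name /aws/lambda/ProcessKinesisRecords
+```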
@@ -183,19 +182,17 @@ Additionally, the following parameters can be tuned:
Refer to our [Kinesis configuration documentation](https://docs.localstack.cloud/references/configuration/#kinesis) for more details on these parameters.
-{{< callout "note" >}}
+:::note
`KINESIS_MOCK_MAXIMUM_HEAP_SIZE` and `KINESIS_MOCK_INITIAL_HEAP_SIZE` are only applicable when using the Scala engine.
Future versions of LocalStack will likely default to using the `scala` engine over the less-performant `node` version currently in use.
-{{< /callout >}}
+:::
## Resource Browser
The LocalStack Web Application provides a Resource Browser for managing Kinesis Streams & Kafka Clusters.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Kinesis** under the **Analytics** section.
-
-
-
+
The Resource Browser allows you to perform the following actions:
diff --git a/src/content/docs/aws/services/kinesisanalytics.md b/src/content/docs/aws/services/kinesisanalytics.md
index 53e715ad..dd722d73 100644
--- a/src/content/docs/aws/services/kinesisanalytics.md
+++ b/src/content/docs/aws/services/kinesisanalytics.md
@@ -1,15 +1,13 @@
---
title: "Kinesis Data Analytics for SQL Applications"
-linkTitle: "Kinesis Data Analytics for SQL Applications"
-description: >
- Get started with Kinesis Data Analytics for SQL Applications on LocalStack
+description: Get started with Kinesis Data Analytics for SQL Applications on LocalStack
tags: ["Ultimate"]
---
-{{< callout "warning" >}}
+:::danger
Amazon Kinesis Data Analytics for SQL Applications will be [retired on 27 January 2026](https://docs.aws.amazon.com/kinesisanalytics/latest/dev/discontinuation.html).
It will be removed from LocalStack soon after this date.
-{{< /callout >}}
+:::
## Introduction
@@ -17,7 +15,7 @@ Kinesis Data Analytics for SQL Applications is a service offered by Amazon Web S
It allows you to apply transformations, filtering, and enrichment to streaming data using standard SQL syntax.
LocalStack allows you to use the Kinesis Data Analytics APIs in your local environment.
-The supported APIs is available on our [API coverage page]({{< ref "coverage_kinesisanalytics" >}}).
+The supported APIs are available on our [API coverage page]().
## Getting started
@@ -30,10 +28,10 @@ We will demonstrate how to create a Kinesis Analytics application using AWS CLI.
You can create a Kinesis Analytics application using the [`CreateApplication`](https://docs.aws.amazon.com/kinesisanalytics/latest/APIReference/API_CreateApplication.html) API by running the following command:
-{{< command >}}
-$ awslocal kinesisanalytics create-application \
+```bash
+awslocal kinesisanalytics create-application \
--application-name test-analytics-app
-{{< /command >}}
+```
The following output would be retrieved:
@@ -51,10 +49,10 @@ The following output would be retrieved:
You can describe the application using the [`DescribeApplication`](https://docs.aws.amazon.com/kinesisanalytics/latest/APIReference/API_DescribeApplication.html) API by running the following command:
-{{< command >}}
-$ awslocal kinesisanalytics describe-application \
+```bash
+awslocal kinesisanalytics describe-application \
--application-name test-analytics-app
-{{< /command >}}
+```
The following output would be retrieved:
@@ -78,18 +76,18 @@ The following output would be retrieved:
Add tags to the application using the [`TagResource`](https://docs.aws.amazon.com/kinesisanalytics/latest/APIReference/API_TagResource.html) API by running the following command:
-{{< command >}}
-$ awslocal kinesisanalytics tag-resource \
+```bash
+awslocal kinesisanalytics tag-resource \
--resource-arn arn:aws:kinesisanalytics:us-east-1:000000000000:application/test-analytics-app \
--tags Key=test,Value=test
-{{< /command >}}
+```
You can list the tags for the application using the [`ListTagsForResource`](https://docs.aws.amazon.com/kinesisanalytics/latest/APIReference/API_ListTagsForResource.html) API by running the following command:
-{{< command >}}
-$ awslocal kinesisanalytics list-tags-for-resource \
+```bash
+awslocal kinesisanalytics list-tags-for-resource \
--resource-arn arn:aws:kinesisanalytics:us-east-1:000000000000:application/test-analytics-app
-{{< /command >}}
+```
The following output would be retrieved:
diff --git a/src/content/docs/aws/services/kms.md b/src/content/docs/aws/services/kms.md
index 47de0f4b..d2246a05 100644
--- a/src/content/docs/aws/services/kms.md
+++ b/src/content/docs/aws/services/kms.md
@@ -1,6 +1,5 @@
---
title: "Key Management Service (KMS)"
-linkTitle: "Key Management Service (KMS)"
description: Get started with Key Management Service (KMS) on LocalStack
persistence: supported
tags: ["Free"]
@@ -14,7 +13,7 @@ KMS allows you to create, delete, list, and update aliases, friendly names for y
You can check [the official AWS documentation](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html) to understand the basic terms and concepts used in the KMS.
LocalStack allows you to use the KMS APIs in your local environment to create, edit, and view symmetric and asymmetric KMS keys, including HMAC keys.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_kms" >}}), which provides information on the extent of KMS's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of KMS's integration with LocalStack.
## Getting started
@@ -28,24 +27,24 @@ We will demonstrate how to create a simple symmetric encryption key and use it t
To generate a new key within the KMS, you can use the [`CreateKey`](https://docs.aws.amazon.com/kms/latest/APIReference/API_CreateKey.html) API.
Execute the following command to create a new key:
-{{< command >}}
-$ awslocal kms create-key
-{{< /command >}}
+```bash
+awslocal kms create-key
+```
By default, this command generates a symmetric encryption key, eliminating the need for any additional arguments.
You can take a look at the `KeyId` of the freshly generated key in the output, and save it for future use.
In case the key ID is misplaced, it is possible to retrieve a comprehensive list of IDs and [Amazon Resource Names](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) (ARNs) for all available keys through the following command:
-{{< command >}}
-$ awslocal kms list-keys
-{{< /command >}}
+```bash
+awslocal kms list-keys
+```
Additionally, if needed, you can obtain extensive details about a specific key by providing its key ID or ARN using the subsequent command:
-{{< command >}}
-$ awslocal kms describe-key --key-id <key-id>
-{{< /command >}}
+```bash
+awslocal kms describe-key --key-id <key-id>
+```
### Encrypt the data
@@ -54,14 +53,14 @@ For instance, let's consider encrypting "_some important stuff_".
To do so, you can use the [`Encrypt`](https://docs.aws.amazon.com/kms/latest/APIReference/API_Encrypt.html) API.
Execute the following command to encrypt the data:
-{{< command >}}
-$ awslocal kms encrypt \
+```bash
+awslocal kms encrypt \
--key-id 010a4301-4205-4df8-ae52-4c2895d47326 \
--plaintext "some important stuff" \
--output text \
--query CiphertextBlob \
| base64 --decode > my_encrypted_data
-{{< /command >}}
+```
You will notice that a new file named `my_encrypted_data` has been created in your current directory.
This file contains the encrypted data, which can be decrypted using the same key.
@@ -74,13 +73,13 @@ However, with asymmetric keys the `KEY_ID` has to be specified.
Execute the following command to decrypt the data:
-{{< command >}}
-$ awslocal kms decrypt \
+```bash
+awslocal kms decrypt \
--ciphertext-blob fileb://my_encrypted_data \
--output text \
--query Plaintext \
| base64 --decode
-{{< /command >}}
+```
Similar to the previous `Encrypt` operation, to retrieve the actual data, it's necessary to decode the Base64-encoded output.
To achieve this, employ the `output` and `query` parameters along with the `base64` tool as before.
@@ -95,9 +94,8 @@ some important stuff
The LocalStack Web Application provides a Resource Browser for managing KMS keys.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **KMS** under the **Security Identity Compliance** section.
-
-
-
+
+
The Resource Browser allows you to perform the following actions:
- **Create Key**: Create a new KMS key by specifying the **Policy**, **Key Usage**, **Tags**, **Multi Region**, **Customer Master Key Spec**, and more.
@@ -113,9 +111,9 @@ This can be useful to pre-seed a test environment and use a static `KeyId` for y
Below is a simple example to create a key with a custom `KeyId` (note that the `KeyId` should have the format of a UUID):
-{{< command >}}
-$ awslocal kms create-key --tags '[{"TagKey":"_custom_id_","TagValue":"00000000-0000-0000-0000-000000000001"}]'
-{{< / command >}}
+```bash
+awslocal kms create-key --tags '[{"TagKey":"_custom_id_","TagValue":"00000000-0000-0000-0000-000000000001"}]'
+```
The following output will be displayed:
@@ -135,21 +133,32 @@ This can be useful to pre-seed a development environment so values encrypted wit
Here is an example of using custom key material with the value being base64 encoded:
-{{< command >}}
-$ echo 'dGhpc2lzYXNlY3VyZWtleQ==' | base64 -d
-
+```bash
+echo 'dGhpc2lzYXNlY3VyZWtleQ==' | base64 -d
+```
+
+The following output will be displayed:
+
+```text
thisisasecurekey
-
-$ awslocal kms create-key --tags '[{"TagKey":"_custom_key_material_","TagValue":"dGhpc2lzYXNlY3VyZWtleQ=="}]'
-
+```
+
+You can create a key with custom key material using the following command:
+
+```bash
+awslocal kms create-key --tags '[{"TagKey":"_custom_key_material_","TagValue":"dGhpc2lzYXNlY3VyZWtleQ=="}]'
+```
+
+The following output will be displayed:
+
+```json
{
"KeyMetadata": {
"AWSAccountId": "000000000000",
"KeyId": "00000000-0000-0000-0000-000000000001",
....
}
-
-{{< / command >}}
+```
## Current Limitations
diff --git a/src/content/docs/aws/services/lakeformation.md b/src/content/docs/aws/services/lakeformation.md
index feac678b..1bc004a4 100644
--- a/src/content/docs/aws/services/lakeformation.md
+++ b/src/content/docs/aws/services/lakeformation.md
@@ -1,6 +1,5 @@
---
title: "Lake Formation"
-linkTitle: "Lake Formation"
description: Get started with Lake Formation on LocalStack
tags: ["Ultimate"]
---
@@ -11,7 +10,7 @@ Lake Formation is a managed service that allows users to build, secure, and mana
Lake Formation allows users to define and enforce fine-grained access controls, manage metadata, and discover and share data across multiple data sources.
LocalStack allows you to use the Lake Formation APIs in your local environment to register resources, grant permissions, and list resources and permissions.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_lakeformation" >}}), which provides information on the extent of Lake Formation's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of Lake Formation's integration with LocalStack.
## Getting started
@@ -24,9 +23,9 @@ We will demonstrate how to register an S3 bucket as a resource in Lake Formation
Create a new S3 bucket named `test-bucket` using the `mb` command:
-{{< command >}}
-$ awslocal s3 mb s3://test-bucket
-{{< /command >}}
+```bash
+awslocal s3 mb s3://test-bucket
+```
You can now register the S3 bucket as a resource in Lake Formation using the [`RegisterResource`](https://docs.aws.amazon.com/lake-formation/latest/dg/API_RegisterResource.html) API.
Create a file named `input.json` with the following content:
@@ -40,19 +39,19 @@ Create a file named `input.json` with the following content:
Run the following command to register the resource:
-{{< command >}}
+```bash
awslocal lakeformation register-resource \
--cli-input-json file://input.json
-{{< /command >}}
+```
### List resources
You can list the registered resources using the [`ListResources`](https://docs.aws.amazon.com/lake-formation/latest/dg/API_ListResources.html) API.
Execute the following command to list the resources:
-{{< command >}}
+```bash
awslocal lakeformation list-resources
-{{< /command >}}
+```
The following output is displayed:
@@ -94,16 +93,16 @@ Create a file named `permissions.json` with the following content:
Run the following command to grant permissions:
-{{< command >}}
-$ awslocal lakeformation grant-permissions \
+```bash
+awslocal lakeformation grant-permissions \
 --cli-input-json file://permissions.json
-{{< /command >}}
+```
### List permissions
You can list the permissions granted to a user or group using the [`ListPermissions`](https://docs.aws.amazon.com/lake-formation/latest/dg/API_ListPermissions.html) API.
Execute the following command to list the permissions:
-{{< command >}}
-$ awslocal lakeformation list-permissions
-{{< /command >}}
+```bash
+awslocal lakeformation list-permissions
+```
diff --git a/src/content/docs/aws/services/lambda.md b/src/content/docs/aws/services/lambda.mdx
similarity index 60%
rename from src/content/docs/aws/services/lambda.md
rename to src/content/docs/aws/services/lambda.mdx
index 05cf56ba..553c0bda 100644
--- a/src/content/docs/aws/services/lambda.md
+++ b/src/content/docs/aws/services/lambda.mdx
@@ -1,11 +1,12 @@
---
title: "Lambda"
-linkTitle: "Lambda"
description: Get started with Lambda on LocalStack
tags: ["Free"]
persistence: supported with limitations
---
+import { Tabs, TabItem } from '@astrojs/starlight/components';
+
## Introduction
AWS Lambda is a Serverless Function as a Service (FaaS) platform that lets you run code in your preferred programming language on the AWS ecosystem.
@@ -13,7 +14,7 @@ AWS Lambda automatically scales your code to meet demand and handles server prov
AWS Lambda allows you to break down your application into smaller, independent functions that integrate seamlessly with AWS services.
LocalStack allows you to use the Lambda APIs to create, deploy, and test your Lambda functions.
-The supported APIs are available on our [Lambda coverage page]({{< ref "coverage_lambda" >}}), which provides information on the extent of Lambda's integration with LocalStack.
+The supported APIs are available on our [Lambda coverage page](), which provides information on the extent of Lambda's integration with LocalStack.
## Getting started
@@ -41,113 +42,123 @@ exports.handler = async (event) => {
Enter the following command to create a new Lambda function:
-{{< command >}}
-$ zip function.zip index.js
-$ awslocal lambda create-function \
+```bash
+zip function.zip index.js
+awslocal lambda create-function \
--function-name localstack-lambda-url-example \
--runtime nodejs18.x \
--zip-file fileb://function.zip \
--handler index.handler \
--role arn:aws:iam::000000000000:role/lambda-role
-{{< / command >}}
+```
-{{< callout "note">}}
+:::note
To create a predictable URL for the function, you can assign a custom ID by specifying the `_custom_id_` tag on the function itself.
-{{< command >}}
-$ awslocal lambda create-function \
+```bash
+awslocal lambda create-function \
--function-name localstack-lambda-url-example \
--runtime nodejs18.x \
--zip-file fileb://function.zip \
--handler index.handler \
--role arn:aws:iam::000000000000:role/lambda-role \
--tags '{"_custom_id_":"my-custom-subdomain"}'
-{{< / command >}}
+```
You must specify the `_custom_id_` tag **before** creating a Function URL.
After the URL configuration is set up, any modifications to the tag will not affect it.
LocalStack supports assigning custom IDs to both the `$LATEST` version of the function or to an existing version alias.
-{{< /callout >}}
+:::
-{{< callout >}}
+:::note
In the old Lambda provider, you could create a function with any arbitrary string as the role, such as `r1`.
However, the new provider requires the role ARN to be in the format `arn:aws:iam::000000000000:role/lambda-role` and validates it using an appropriate regex, although it currently does not check whether the role exists.
-{{< /callout >}}
+:::
### Invoke the Function
To invoke the Lambda function, you can use the [`Invoke` API](https://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html).
Run the following command to invoke the function:
-{{< tabpane text=true persist=false >}}
- {{% tab header="AWS CLI v1" lang="shell" %}}
- {{< command >}}
- $ awslocal lambda invoke --function-name localstack-lambda-url-example \
+<Tabs>
+<TabItem label="AWS CLI v1">
+```bash
+awslocal lambda invoke --function-name localstack-lambda-url-example \
--payload '{"body": "{\"num1\": \"10\", \"num2\": \"10\"}" }' output.txt
- {{< /command >}}
- {{% /tab %}}
- {{% tab header="AWS CLI v2" lang="shell" %}}
- {{< command >}}
- $ awslocal lambda invoke --function-name localstack-lambda-url-example \
+```
+</TabItem>
+<TabItem label="AWS CLI v2">
+```bash
+awslocal lambda invoke --function-name localstack-lambda-url-example \
--cli-binary-format raw-in-base64-out \
--payload '{"body": "{\"num1\": \"10\", \"num2\": \"10\"}" }' output.txt
- {{< /command >}}
- {{% /tab %}}
-{{< /tabpane >}}
+```
+</TabItem>
+</Tabs>
### Create a Function URL
-{{< callout >}}
+:::note
[Response streaming](https://docs.aws.amazon.com/lambda/latest/dg/configuration-response-streaming.html) is currently not supported, so it will still return a synchronous/full response instead.
-{{< /callout >}}
+:::
With the Function URL property, there is now a new way to call a Lambda Function via HTTP API call using the [`CreateFunctionURLConfig` API](https://docs.aws.amazon.com/lambda/latest/dg/API_CreateFunctionUrlConfig.html).
To create a URL for invoking the function, run the following command:
-{{< command >}}
-$ awslocal lambda create-function-url-config \
+```bash
+awslocal lambda create-function-url-config \
--function-name localstack-lambda-url-example \
--auth-type NONE
-{{< / command >}}
+```
This will generate an HTTP URL that can be used to invoke the Lambda function.
The URL will be in the format `http://<url-id>.lambda-url.us-east-1.localhost.localstack.cloud:4566`.
-{{< callout "note">}}
+:::note
As previously mentioned, when a Lambda Function has a `_custom_id_` tag, LocalStack sets this tag's value as the subdomain in the Function's URL.
-{{< command >}}
-$ awslocal lambda create-function-url-config \
+```bash
+awslocal lambda create-function-url-config \
--function-name localstack-lambda-url-example \
--auth-type NONE
+```
+
+The following output would be retrieved:
+
+```json
{
"FunctionUrl": "http://my-custom-subdomain.lambda-url....",
....
}
-{{< / command >}}
+```
In addition, if you pass an existing version alias as a `Qualifier` to the request, the created URL will combine the custom ID and the alias in the form `<custom-id>-<alias>`.
-{{< command >}}
-$ awslocal lambda create-function-url-config \
+```bash
+awslocal lambda create-function-url-config \
--function-name localstack-lambda-url-example \
 --auth-type NONE \
--qualifier test-alias
+```
+
+The following output would be retrieved:
+
+```json
{
"FunctionUrl": "http://my-custom-subdomain-test-alias.lambda-url....",
....
}
-{{< / command >}}
-{{< /callout >}}
+```
+:::
### Trigger the Lambda function URL
You can now trigger the Lambda function by sending an HTTP POST request to the URL using [curl](https://curl.se/) or your REST HTTP client:
-{{< command >}}
-$ curl -X POST \
+```bash
+curl -X POST \
 'http://<url-id>.lambda-url.us-east-1.localhost.localstack.cloud:4566/' \
-H 'Content-Type: application/json' \
-d '{"num1": "10", "num2": "10"}'
-{{< / command >}}
+```
The following output would be retrieved:
@@ -170,48 +181,259 @@ The following event sources are supported in LocalStack:
The table below shows feature coverage for all supported event sources for the latest version of LocalStack.
-Unlike [API operation coverage]({{< ref "coverage_lambda" >}}), this table illustrates the **functional and behavioural coverage** of LocalStack's Lambda Event Source Mapping implementation.
+Unlike [API operation coverage](), this table illustrates the **functional and behavioural coverage** of LocalStack's Lambda Event Source Mapping implementation.
Where necessary, footnotes are used to provide additional context.
-{{< callout >}}
+:::note
Feature availability and coverage is categorized with the following system:
- ⭐️ Only Available in LocalStack licensed editions
- 🟢 Fully Implemented
- 🟡 Partially Implemented
- 🟠 Not Implemented
- ➖ Not Applicable (Not Supported by AWS)
-{{< /callout >}}
-
-| | | SQS | | Stream | | Kafka ⭐️ | |
-|--------------------------------|-------------------------------------------------|:--------:|:----:|:---------:|:----------:|:----------:|:------------:|
-| **Parameter** | **Description** | **Standard** | **FIFO** | **Kinesis** | **DynamoDB** | **Amazon MSK** | **Self-Managed** |
-| BatchSize | Batching events by count. | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 |
-| *Not Configurable* | Batch when ≥ 6 MB limit. | 🟠 | 🟠 | 🟠 | 🟠 | 🟢 | 🟢 |
-| MaximumBatchingWindowInSeconds | Batch by Time Window. | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 |
-| MaximumRetryAttempts | Discard after N retries. | ➖ | ➖ | 🟢 | 🟢 | ➖ | ➖ |
-| MaximumRecordAgeInSeconds | Discard records older than time `t`. | ➖ | ➖ | 🟢 | 🟢 | ➖ | ➖ |
-| Enabled | Enabling/Disabling. | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 |
-| FilterCriteria | Filter pattern evaluating. [^1] [^2] | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 |
-| FunctionResponseTypes | Enabling ReportBatchItemFailures. | 🟢 | 🟢 | 🟢 | 🟢 | ➖ | ➖ |
-| BisectBatchOnFunctionError | Bisect a batch on error and retry. | ➖ | ➖ | 🟠 | 🟠 | ➖ | ➖ |
-| ScalingConfig | The scaling configuration for the event source. | 🟠 | 🟠 | ➖ | ➖ | ➖ | ➖ |
-| ParallelizationFactor | Parallel batch processing by shard. | ➖ | ➖ | 🟠 | 🟠 | ➖ | ➖ |
-| DestinationConfig.OnFailure | SQS Failure Destination. | ➖ | ➖ | 🟢 | 🟢 | 🟠 | 🟠 |
-| | SNS Failure Destination. | ➖ | ➖ | 🟢 | 🟢 | 🟠 | 🟠 |
-| | S3 Failure Destination. | ➖ | ➖ | 🟢 | 🟢 | 🟠 | 🟠 |
-| DestinationConfig.OnSuccess | Success Destinations. | ➖ | ➖ | ➖ | ➖ | ➖ | ➖ |
-| MetricsConfig | CloudWatch metrics. | 🟠 | 🟠 | 🟠 | 🟠 | 🟠 | 🟠 |
-| ProvisionedPollerConfig | Control throughput via min-max limits. | ➖ | ➖ | ➖ | ➖ | 🟠 | 🟠 |
-| StartingPosition | Position to start reading from. | ➖ | ➖ | 🟢 | 🟢 | 🟢 | 🟢 |
-| StartingPositionTimestamp | Timestamp to start reading from. | ➖ | ➖ | 🟢 | ➖ | 🟢 | 🟢 |
-| TumblingWindowInSeconds | Duration (seconds) of a processing window. | ➖ | ➖ | 🟠 | 🟠 | ➖ | ➖ |
-| Topics ⭐️ | Kafka topics to read from. | ➖ | ➖ | ➖ | ➖ | 🟢 | 🟢 |
+:::
+
+import { Table, TableHeader, TableBody, TableHead, TableRow, TableCell } from '@/components/ui/table';
+
+<Table>
+  <TableHeader>
+    <TableRow>
+      <TableHead rowSpan={2}>Parameter</TableHead>
+      <TableHead rowSpan={2}>Description</TableHead>
+      <TableHead colSpan={2}>SQS</TableHead>
+      <TableHead colSpan={2}>Stream</TableHead>
+      <TableHead colSpan={2}>Kafka ⭐️</TableHead>
+    </TableRow>
+    <TableRow>
+      <TableHead>Standard</TableHead>
+      <TableHead>FIFO</TableHead>
+      <TableHead>Kinesis</TableHead>
+      <TableHead>DynamoDB</TableHead>
+      <TableHead>Amazon MSK</TableHead>
+      <TableHead>Self-Managed</TableHead>
+    </TableRow>
+  </TableHeader>
+  <TableBody>
+    <TableRow><TableCell>BatchSize</TableCell><TableCell>Batching events by count.</TableCell><TableCell>🟢</TableCell><TableCell>🟢</TableCell><TableCell>🟢</TableCell><TableCell>🟢</TableCell><TableCell>🟢</TableCell><TableCell>🟢</TableCell></TableRow>
+    <TableRow><TableCell><em>Not Configurable</em></TableCell><TableCell>Batch when ≥ 6 MB limit.</TableCell><TableCell>🟠</TableCell><TableCell>🟠</TableCell><TableCell>🟠</TableCell><TableCell>🟠</TableCell><TableCell>🟢</TableCell><TableCell>🟢</TableCell></TableRow>
+    <TableRow><TableCell>MaximumBatchingWindowInSeconds</TableCell><TableCell>Batch by Time Window.</TableCell><TableCell>🟢</TableCell><TableCell>🟢</TableCell><TableCell>🟢</TableCell><TableCell>🟢</TableCell><TableCell>🟢</TableCell><TableCell>🟢</TableCell></TableRow>
+    <TableRow><TableCell>MaximumRetryAttempts</TableCell><TableCell>Discard after N retries.</TableCell><TableCell>➖</TableCell><TableCell>➖</TableCell><TableCell>🟢</TableCell><TableCell>🟢</TableCell><TableCell>➖</TableCell><TableCell>➖</TableCell></TableRow>
+    <TableRow><TableCell>MaximumRecordAgeInSeconds</TableCell><TableCell>Discard records older than time <code>t</code>.</TableCell><TableCell>➖</TableCell><TableCell>➖</TableCell><TableCell>🟢</TableCell><TableCell>🟢</TableCell><TableCell>➖</TableCell><TableCell>➖</TableCell></TableRow>
+    <TableRow><TableCell>Enabled</TableCell><TableCell>Enabling/Disabling.</TableCell><TableCell>🟢</TableCell><TableCell>🟢</TableCell><TableCell>🟢</TableCell><TableCell>🟢</TableCell><TableCell>🟢</TableCell><TableCell>🟢</TableCell></TableRow>
+    <TableRow><TableCell>FilterCriteria</TableCell><TableCell>Filter pattern evaluating. [^1] [^2]</TableCell><TableCell>🟢</TableCell><TableCell>🟢</TableCell><TableCell>🟢</TableCell><TableCell>🟢</TableCell><TableCell>🟢</TableCell><TableCell>🟢</TableCell></TableRow>
+    <TableRow><TableCell>FunctionResponseTypes</TableCell><TableCell>Enabling ReportBatchItemFailures.</TableCell><TableCell>🟢</TableCell><TableCell>🟢</TableCell><TableCell>🟢</TableCell><TableCell>🟢</TableCell><TableCell>➖</TableCell><TableCell>➖</TableCell></TableRow>
+    <TableRow><TableCell>BisectBatchOnFunctionError</TableCell><TableCell>Bisect a batch on error and retry.</TableCell><TableCell>➖</TableCell><TableCell>➖</TableCell><TableCell>🟠</TableCell><TableCell>🟠</TableCell><TableCell>➖</TableCell><TableCell>➖</TableCell></TableRow>
+    <TableRow><TableCell>ScalingConfig</TableCell><TableCell>The scaling configuration for the event source.</TableCell><TableCell>🟠</TableCell><TableCell>🟠</TableCell><TableCell>➖</TableCell><TableCell>➖</TableCell><TableCell>➖</TableCell><TableCell>➖</TableCell></TableRow>
+    <TableRow><TableCell>ParallelizationFactor</TableCell><TableCell>Parallel batch processing by shard.</TableCell><TableCell>➖</TableCell><TableCell>➖</TableCell><TableCell>🟠</TableCell><TableCell>🟠</TableCell><TableCell>➖</TableCell><TableCell>➖</TableCell></TableRow>
+    <TableRow><TableCell>DestinationConfig.OnFailure</TableCell><TableCell>SQS Failure Destination.</TableCell><TableCell>➖</TableCell><TableCell>➖</TableCell><TableCell>🟢</TableCell><TableCell>🟢</TableCell><TableCell>🟠</TableCell><TableCell>🟠</TableCell></TableRow>
+    <TableRow><TableCell></TableCell><TableCell>SNS Failure Destination.</TableCell><TableCell>➖</TableCell><TableCell>➖</TableCell><TableCell>🟢</TableCell><TableCell>🟢</TableCell><TableCell>🟠</TableCell><TableCell>🟠</TableCell></TableRow>
+    <TableRow><TableCell></TableCell><TableCell>S3 Failure Destination.</TableCell><TableCell>➖</TableCell><TableCell>➖</TableCell><TableCell>🟢</TableCell><TableCell>🟢</TableCell><TableCell>🟠</TableCell><TableCell>🟠</TableCell></TableRow>
+    <TableRow><TableCell>DestinationConfig.OnSuccess</TableCell><TableCell>Success Destinations.</TableCell><TableCell>➖</TableCell><TableCell>➖</TableCell><TableCell>➖</TableCell><TableCell>➖</TableCell><TableCell>➖</TableCell><TableCell>➖</TableCell></TableRow>
+    <TableRow><TableCell>MetricsConfig</TableCell><TableCell>CloudWatch metrics.</TableCell><TableCell>🟠</TableCell><TableCell>🟠</TableCell><TableCell>🟠</TableCell><TableCell>🟠</TableCell><TableCell>🟠</TableCell><TableCell>🟠</TableCell></TableRow>
+    <TableRow><TableCell>ProvisionedPollerConfig</TableCell><TableCell>Control throughput via min-max limits.</TableCell><TableCell>➖</TableCell><TableCell>➖</TableCell><TableCell>➖</TableCell><TableCell>➖</TableCell><TableCell>🟠</TableCell><TableCell>🟠</TableCell></TableRow>
+    <TableRow><TableCell>StartingPosition</TableCell><TableCell>Position to start reading from.</TableCell><TableCell>➖</TableCell><TableCell>➖</TableCell><TableCell>🟢</TableCell><TableCell>🟢</TableCell><TableCell>🟢</TableCell><TableCell>🟢</TableCell></TableRow>
+    <TableRow><TableCell>StartingPositionTimestamp</TableCell><TableCell>Timestamp to start reading from.</TableCell><TableCell>➖</TableCell><TableCell>➖</TableCell><TableCell>🟢</TableCell><TableCell>➖</TableCell><TableCell>🟢</TableCell><TableCell>🟢</TableCell></TableRow>
+    <TableRow><TableCell>TumblingWindowInSeconds</TableCell><TableCell>Duration (seconds) of a processing window.</TableCell><TableCell>➖</TableCell><TableCell>➖</TableCell><TableCell>🟠</TableCell><TableCell>🟠</TableCell><TableCell>➖</TableCell><TableCell>➖</TableCell></TableRow>
+    <TableRow><TableCell>Topics ⭐️</TableCell><TableCell>Kafka topics to read from.</TableCell><TableCell>➖</TableCell><TableCell>➖</TableCell><TableCell>➖</TableCell><TableCell>➖</TableCell><TableCell>🟢</TableCell><TableCell>🟢</TableCell></TableRow>
+  </TableBody>
+</Table>
[^1]: Read more at [Control which events Lambda sends to your function](https://docs.aws.amazon.com/lambda/latest/dg/invocation-eventfiltering.html)
[^2]: The available Metadata properties may not have full parity with AWS depending on the event source (read more at [Understanding event filtering basics](https://docs.aws.amazon.com/lambda/latest/dg/invocation-eventfiltering.html#filtering-basics)).
-Create a [GitHub issue](https://github.com/localstack/localstack/issues/new/choose) or reach out to [LocalStack support]({{< ref "/getting-started/help-and-support" >}}) if you experience any challenges.
+Create a [GitHub issue](https://github.com/localstack/localstack/issues/new/choose) or reach out to [LocalStack support](/aws/getting-started/help-support) if you experience any challenges.
## Lambda Layers (Pro)
@@ -224,22 +446,22 @@ The Community image also allows creating, updating, and deleting Lambda Layers,
To create a Lambda Layer locally, you can use the [`PublishLayerVersion` API](https://docs.aws.amazon.com/lambda/latest/dg/API_PublishLayerVersion.html) in LocalStack.
Here's a simple example using Python:
-{{< command >}}
-$ mkdir -p /tmp/python/
-$ echo 'def util():' > /tmp/python/testlayer.py
-$ echo ' print("Output from Lambda layer util function")' >> /tmp/python/testlayer.py
-$ (cd /tmp; zip -r testlayer.zip python)
-$ LAYER_ARN=$(awslocal lambda publish-layer-version --layer-name layer1 --zip-file fileb:///tmp/testlayer.zip | jq -r .LayerVersionArn)
-{{< / command >}}
+```bash
+mkdir -p /tmp/python/
+echo 'def util():' > /tmp/python/testlayer.py
+echo ' print("Output from Lambda layer util function")' >> /tmp/python/testlayer.py
+(cd /tmp; zip -r testlayer.zip python)
+LAYER_ARN=$(awslocal lambda publish-layer-version --layer-name layer1 --zip-file fileb:///tmp/testlayer.zip | jq -r .LayerVersionArn)
+```
Next, define a Lambda function that uses our layer:
-{{< command >}}
-$ echo 'def handler(*args, **kwargs):' > /tmp/testlambda.py
-$ echo ' import testlayer; testlayer.util()' >> /tmp/testlambda.py
-$ echo ' print("Debug output from Lambda function")' >> /tmp/testlambda.py
-$ (cd /tmp; zip testlambda.zip testlambda.py)
-$ awslocal lambda create-function \
+```bash
+echo 'def handler(*args, **kwargs):' > /tmp/testlambda.py
+echo ' import testlayer; testlayer.util()' >> /tmp/testlambda.py
+echo ' print("Debug output from Lambda function")' >> /tmp/testlambda.py
+(cd /tmp; zip testlambda.zip testlambda.py)
+awslocal lambda create-function \
--function-name func1 \
--runtime python3.8 \
--role arn:aws:iam::000000000000:role/lambda-role \
@@ -247,7 +469,7 @@ $ awslocal lambda create-function \
--timeout 30 \
--zip-file fileb:///tmp/testlambda.zip \
--layers $LAYER_ARN
-{{< / command >}}
+```
Here, we've defined a Lambda function called `handler()` that imports the `util()` function from our `layer1` Lambda Layer.
We then used the [`CreateFunction` API](https://docs.aws.amazon.com/lambda/latest/dg/API_CreateFunction.html) to create this Lambda function in LocalStack, specifying the `layer1` Lambda Layer as a dependency.
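+
+As a quick check, here is a sketch of invoking the function once it is active (using the `func1` function created above):
+
+```bash
+awslocal lambda invoke --function-name func1 /tmp/out.json
+```
+
+The function's logs should then include the layer's "Output from Lambda layer util function" line.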
@@ -269,14 +491,14 @@ This account is managed by LocalStack on AWS.
To grant access to your layer, run the following command:
-{{< command >}}
-$ aws lambda add-layer-version-permission \
+```bash
+aws lambda add-layer-version-permission \
--layer-name test-layer \
--version-number 1 \
--statement-id layerAccessFromLocalStack \
--principal 886468871268 \
--action lambda:GetLayerVersion
-{{< / command >}}
+```
Replace `test-layer` and `1` with the name and version number of your layer, respectively.
@@ -287,13 +509,13 @@ After granting access, the next time you reference the layer in one of your loca
LocalStack uses a [custom implementation](https://github.com/localstack/lambda-runtime-init/) of the
[AWS Lambda Runtime Interface Emulator](https://github.com/aws/aws-lambda-runtime-interface-emulator)
to match the behavior of AWS Lambda as closely as possible while providing additional features
-such as [hot reloading]({{< ref "hot-reloading" >}}).
+such as [hot reloading](/aws/tooling/lambda-tools/hot-reloading).
We ship our custom implementation as a Golang binary, which gets copied into each Lambda container under `/var/rapid/init`.
This init binary is used as the entry point for every Lambda container.
Our custom implementation offers additional configuration options,
but these configurations are primarily intended for LocalStack developers and could change in the future.
-The LocalStack [configuration]({{< ref "configuration" >}}) `LAMBDA_DOCKER_FLAGS` can be used to configure all Lambda containers,
+The LocalStack [configuration](/aws/capabilities/config/configuration) `LAMBDA_DOCKER_FLAGS` can be used to configure all Lambda containers,
for example `LAMBDA_DOCKER_FLAGS=-e LOCALSTACK_INIT_LOG_LEVEL=debug`.
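+
+For instance, a minimal sketch of starting LocalStack with this flag via the CLI (assuming the `localstack start` workflow):
+
+```bash
+LAMBDA_DOCKER_FLAGS='-e LOCALSTACK_INIT_LOG_LEVEL=debug' localstack start
+```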
Some noteworthy configurations include:
- `LOCALSTACK_INIT_LOG_LEVEL` defines the log level of the Golang binary.
@@ -309,23 +531,23 @@ The full list of configurations is defined in the Golang function
LocalStack provides various tools to help you develop, debug, and test your AWS Lambda functions more efficiently.
- **Hot reloading**: With Lambda hot reloading, you can continuously apply code changes to your Lambda functions without needing to redeploy them manually.
- To learn more about how to use hot reloading with LocalStack, check out our [hot reloading documentation]({{< ref "hot-reloading" >}}).
+ To learn more about how to use hot reloading with LocalStack, check out our [hot reloading documentation](/aws/capabilities/lambda-tools/hot-reloading).
- **Remote debugging**: LocalStack's remote debugging functionality allows you to attach a debugger to your Lambda function using your preferred IDE.
- To get started with remote debugging in LocalStack, see our [debugging documentation]({{< ref "debugging" >}}).
+ To get started with remote debugging in LocalStack, see our [debugging documentation](/aws/capabilities/lambda-tools/remote-debugging).
- **Lambda VS Code Extension**: LocalStack's Lambda VS Code Extension supports deploying and invoking Python Lambda functions through AWS SAM or AWS CloudFormation.
- To get started with the Lambda VS Code Extension, see our [Lambda VS Code Extension documentation]({{< ref "user-guide/lambda-tools/vscode-extension" >}}).
+ To get started with the Lambda VS Code Extension, see our [Lambda VS Code Extension documentation](/aws/tooling/lambda-tools/vscode-extension).
- **API for querying Lambda runtimes**: LocalStack offers a metadata API to query the list of Lambda runtimes via `GET http://localhost.localstack.cloud:4566/_aws/lambda/runtimes`.
It returns the [Supported Runtimes](https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtimes.html) matching AWS parity (i.e., excluding deprecated runtimes) and offers additional filters for `deprecated` runtimes and `all` runtimes (`GET /_aws/lambda/runtimes?filter=all`).
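+
+For instance, a quick sketch of querying this endpoint with curl (assuming the default edge port 4566):
+
+```bash
+curl "http://localhost.localstack.cloud:4566/_aws/lambda/runtimes?filter=all"
+```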
## Resource Browser
-The LocalStack Web Application provides a [Resource Browser]({{< ref "/user-guide/web-application/resource-browser/" >}}) for managing Lambda resources.
+The LocalStack Web Application provides a [Resource Browser](/aws/capabilities/web-app/resource-browser) for managing Lambda resources.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Lambda** under the **Compute** section.
The Resource Browser displays [Functions](https://app.localstack.cloud/resources/lambda/functions) and [Layers](https://app.localstack.cloud/resources/lambda/layers) resources.
You can click on individual resources to view their details.
-
+
The Resource Browser allows you to perform the following actions:
@@ -336,16 +558,16 @@ The Resource Browser allows you to perform the following actions:
## Migrating to Lambda v2
-{{< callout >}}
+:::note
The legacy Lambda implementation has been removed since LocalStack 3.0 (Docker `latest` since 2023-11-09).
-{{< /callout >}}
+:::
As part of the [LocalStack 2.0 release](https://discuss.localstack.cloud/t/new-lambda-implementation-in-localstack-2-0/258), the Lambda provider has been migrated to `v2` (formerly known as `asf`).
With the new implementation, the following changes have been introduced:
- To run Lambda functions in LocalStack, mount the Docker socket into the LocalStack container.
Add the following Docker volume mount to your LocalStack startup configuration: `/var/run/docker.sock:/var/run/docker.sock`.
- You can find an example of this configuration in our official [`docker-compose.yml` file]({{< ref "/getting-started/installation/#starting-localstack-with-docker-compose" >}}).
+ You can find an example of this configuration in our official [`docker-compose.yml` file](/aws/getting-started/installation/#starting-localstack-with-docker-compose).
- The `v2` provider discontinues Lambda Executor Modes such as `LAMBDA_EXECUTOR=local`.
Previously, this mode was used as a fallback when the Docker socket was unavailable in the LocalStack container, but many users unintentionally used it instead of the configured `LAMBDA_EXECUTOR=docker`.
The new provider now behaves similarly to the old `docker-reuse` executor and does not require such configuration.
@@ -358,7 +580,7 @@ With the new implementation, the following changes have been introduced:
The ARM containers for compatible runtimes are based on Amazon Linux 2, and ARM-compatible hosts can create functions with the `arm64` architecture.
- Lambda functions in LocalStack resolve AWS domains, such as `s3.amazonaws.com`, to the LocalStack container.
This domain resolution is DNS-based and can be disabled by setting `DNS_ADDRESS=0`.
- For more information, refer to [Transparent Endpoint Injection]({{< ref "user-guide/tools/transparent-endpoint-injection" >}}).
+ For more information, refer to [Transparent Endpoint Injection](/aws/capabilities/networking/transparent-endpoint-injection).
Previously, LocalStack provided patched AWS SDKs to redirect AWS API calls transparently to LocalStack.
- The new provider may generate more exceptions due to invalid input.
For instance, while the old provider accepted arbitrary strings (such as `r1`) as Lambda roles when creating a function, the new provider validates role ARNs using a regular expression that requires them to be in the format `arn:aws:iam::000000000000:role/lambda-role`.
@@ -369,7 +591,7 @@ With the new implementation, the following changes have been introduced:
The configuration `LAMBDA_SYNCHRONOUS_CREATE=1` can force synchronous function creation, but it is not recommended.
- LocalStack's Lambda implementation allows you to customize the Lambda execution environment using the [Lambda Extensions API](https://docs.aws.amazon.com/lambda/latest/dg/runtimes-extensions-api.html).
This API allows for advanced monitoring, observability, or developer tooling, providing greater control and flexibility over your Lambda functions.
- Lambda functions can also be run on hosts with [multi-architecture support]({{< ref "/references/arm64-support/#lambda-multi-architecture-support" >}}), allowing you to leverage LocalStack's Lambda API to develop and test Lambda functions with high parity.
+ Lambda functions can also be run on hosts with [multi-architecture support](), allowing you to leverage LocalStack's Lambda API to develop and test Lambda functions with high parity.
The following configuration options from the old provider are discontinued in the new provider:
@@ -416,21 +638,27 @@ However, many users inadvertently used the local executor mode instead of the in
If you encounter the following error message, you may be using the local executor mode:
-{{< tabpane lang="bash" >}}
-{{< tab header="LocalStack Logs" lang="shell" >}}
+<Tabs>
+<TabItem label="LocalStack Logs">
+```bash
Lambda 'arn:aws:lambda:us-east-1:000000000000:function:my-function:$LATEST' changed to failed.
Reason: Docker not available
...
raise DockerNotAvailable("Docker not available")
-{{< /tab >}}
-{{< tab header="AWS CLI" lang="shell" >}}
+```
+</TabItem>
+<TabItem label="AWS CLI">
+```bash
An error occurred (ResourceConflictException) when calling the Invoke operation (reached max retries: 0): The operation cannot be performed at this time.
The function is currently in the following state: Failed
-{{< /tab >}}
-{{< tab header="SAM" lang="shell" >}}
+```
+</TabItem>
+<TabItem label="SAM">
+```bash
Error: Failed to create/update the stack: sam-app, Waiter StackCreateComplete failed: Waiter encountered a terminal failure state: For expression "Stacks[].StackStatus" we matched expected path: "CREATE_FAILED" at least once
-{{< /tab >}}
-{{< /tabpane >}}
+```
+</TabItem>
+</Tabs>
To fix this issue, add the Docker volume mount `/var/run/docker.sock:/var/run/docker.sock` to your LocalStack startup.
Refer to our [sample `docker-compose.yml` file](https://github.com/localstack/localstack/blob/master/docker-compose.yml) as an example.
@@ -438,21 +666,34 @@ Refer to our [sample `docker-compose.yml` file](https://github.com/localstack/lo
### Function in Pending state
If you receive a `ResourceConflictException` when trying to invoke a function, it is currently in a `Pending` state and cannot be executed yet.
-To wait until the function becomes `active`, you can use the following command:
-{{< command >}}
-$ awslocal lambda get-function --function-name my-function
+The invocation fails with an error similar to the following:
+
+```bash
An error occurred (ResourceConflictException) when calling the Invoke operation (reached max retries: 0):
The operation cannot be performed at this time.
The function is currently in the following state: Pending
+```
+
+To wait until the function becomes `active`, you can use the following command:
-$ awslocal lambda wait function-active-v2 --function-name my-function
-{{< / command >}}
+```bash
+awslocal lambda wait function-active-v2 --function-name my-function
+```
Alternatively, you can check the function state using the [`GetFunction` API](https://docs.aws.amazon.com/lambda/latest/dg/API_GetFunction.html):
-{{< command >}}
-$ awslocal lambda get-function --function-name my-function
+```bash
+awslocal lambda get-function --function-name my-function
+```
+
+The output will be similar to the following:
+
+```json
{
"Configuration": {
...
@@ -463,8 +704,17 @@ $ awslocal lambda get-function --function-name my-function
...
}
}
+```
-$ awslocal lambda get-function --function-name my-function
+Once the function is active, run the command again:
+
+```bash
+awslocal lambda get-function --function-name my-function
+```
+
+The output will be similar to the following:
+
+```json
{
"Configuration": {
...
@@ -474,7 +724,7 @@ $ awslocal lambda get-function --function-name my-function
...
}
}
-{{< / command >}}
+```
If the function is still in the `Pending` state, the output will include a `"State": "Pending"` field and a `"StateReason": "The function is being created."` message.
Once the function is active, the `"State"` field will change to `"Active"` and the `"LastUpdateStatus"` field will indicate the status of the last update.
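+
+For scripts that poll this state, here is a minimal sketch extracting just the `State` field (the `--query` expression is standard AWS CLI JMESPath):
+
+```bash
+awslocal lambda get-function --function-name my-function \
+  --query 'Configuration.State' --output text
+```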
diff --git a/src/content/docs/aws/services/managedblockchain.md b/src/content/docs/aws/services/managedblockchain.md
index c87ea6f0..065d0c8c 100644
--- a/src/content/docs/aws/services/managedblockchain.md
+++ b/src/content/docs/aws/services/managedblockchain.md
@@ -1,16 +1,16 @@
---
title: "Managed Blockchain (AMB)"
-linkTitle: "Managed Blockchain (AMB)"
-description: >
- Get started with Managed Blockchain (AMB) on LocalStack
+description: Get started with Managed Blockchain (AMB) on LocalStack
tags: ["Ultimate"]
---
+## Introduction
+
Managed Blockchain (AMB) is a managed service that enables the creation and management of blockchain networks, such as Hyperledger Fabric, Bitcoin, Polygon and Ethereum.
Blockchain enables the development of applications in which multiple entities can conduct transactions and exchange data securely and transparently, eliminating the requirement for a central, trusted authority.
LocalStack allows you to use the AMB APIs to develop and deploy decentralized applications in your local environment.
-The supported APIs are available on our [API Coverage Page]({{< ref "coverage_managedblockchain" >}}), which provides information on the extent of AMB integration with LocalStack.
+The supported APIs are available on our [API Coverage Page](), which provides information on the extent of AMB integration with LocalStack.
## Getting started
@@ -24,8 +24,8 @@ We will demonstrate how to create a blockchain network, a node, and a proposal.
You can create a blockchain network using the [`CreateNetwork`](https://docs.aws.amazon.com/managed-blockchain/latest/APIReference/API_CreateNetwork.html) API.
Run the following command to create a network named `OurBlockchainNet` that uses Hyperledger Fabric with the following configuration:
-{{< command >}}
-$ awslocal managedblockchain create-network \
+```bash
+awslocal managedblockchain create-network \
--cli-input-json '{
"Name": "OurBlockchainNet",
"Description": "OurBlockchainNetDesc",
@@ -63,13 +63,16 @@ $ awslocal managedblockchain create-network \
}
}
}'
-
+```
+
+The output will be similar to the following:
+
+```json
{
"NetworkId": "n-X24AF1AK2GC6MDW11HYW5I5DQC",
"MemberId": "m-6VWBWHP2Y15F7TQ2DS093RTCW2"
}
-
-{{< / command >}}
+```
Copy the `NetworkId` and `MemberId` values from the output of the above command, as we will need them in the next step.
@@ -78,8 +81,8 @@ Copy the `NetworkId` and `MemberId` values from the output of the above command,
You can create a node using the [`CreateNode`](https://docs.aws.amazon.com/managed-blockchain/latest/APIReference/API_CreateNode.html) API.
Run the following command to create a node with the following configuration:
-{{< command >}}
-$ awslocal managedblockchain create-node \
+```bash
+awslocal managedblockchain create-node \
--node-configuration '{
"InstanceType": "bc.t3.small",
"AvailabilityZone": "us-east-1a",
@@ -100,12 +103,15 @@ $ awslocal managedblockchain create-node \
}' \
--network-id n-X24AF1AK2GC6MDW11HYW5I5DQC \
--member-id m-6VWBWHP2Y15F7TQ2DS093RTCW2
-
+```
+
+The output will be similar to the following:
+
+```json
{
"NodeId": "nd-77K8AI0O5BEQD1IW4L8OGKMXV7"
}
-
-{{< / command >}}
+```
Replace the `NetworkId` and `MemberId` values in the above command with the values you copied in the previous step.
@@ -114,16 +120,19 @@ Replace the `NetworkId` and `MemberId` values in the above command with the valu
You can create a proposal using the [`CreateProposal`](https://docs.aws.amazon.com/managed-blockchain/latest/APIReference/API_CreateProposal.html) API.
Run the following command to create a proposal with the following configuration:
-{{< command >}}
-$ awslocal managedblockchain create-proposal \
+```bash
+awslocal managedblockchain create-proposal \
--actions "Invitations=[{Principal=000000000000}]" \
--network-id n-X24AF1AK2GC6MDW11HYW5I5DQC \
--member-id m-6VWBWHP2Y15F7TQ2DS093RTCW2
-
+```
+
+The output will be similar to the following:
+
+```json
{
"ProposalId": "p-NK0PSLDPETJQX01Q4OLBRHP8CZ"
}
-
-{{< / command >}}
+```
Replace the `NetworkId` and `MemberId` values in the above command with the values you copied in the previous step.
diff --git a/src/content/docs/aws/services/mediastore.md b/src/content/docs/aws/services/mediastore.md
index 1e4c0704..eff6dfa9 100644
--- a/src/content/docs/aws/services/mediastore.md
+++ b/src/content/docs/aws/services/mediastore.md
@@ -1,6 +1,5 @@
---
title: Elemental MediaStore
-linkTitle: Elemental MediaStore
description: Get started with Elemental MediaStore on LocalStack
tags: ["Ultimate"]
---
@@ -12,7 +11,7 @@ It provides a reliable way to store, manage, and serve media assets, such as aud
MediaStore seamlessly integrates with other AWS services like Elemental MediaConvert, Elemental MediaLive, Elemental MediaPackage, and CloudFront.
LocalStack allows you to use the Elemental MediaStore APIs as a high-performance storage solution for media content in your local environment.
-The supported APIs are available on our [API Coverage Page]({{< ref "coverage_mediastore" >}}), which provides information on the extent of Elemental MediaStore integration with LocalStack.
+The supported APIs are available on our [API Coverage Page](), which provides information on the extent of Elemental MediaStore integration with LocalStack.
## Getting started
@@ -26,9 +25,9 @@ We will demonstrate how you can create a MediaStore container, upload an asset,
You can create a container using the [`CreateContainer`](https://docs.aws.amazon.com/mediastore/latest/apireference/API_CreateContainer.html) API.
Run the following command to create a container and retrieve the `Endpoint` value, which should be used in subsequent requests:
-{{< command >}}
-$ awslocal mediastore create-container --container-name mycontainer
-{{< / command >}}
+```bash
+awslocal mediastore create-container --container-name mycontainer
+```
You should see the following output:
@@ -50,13 +49,13 @@ This action will transfer the file to the specified path, `/myfolder/myfile.txt`
Provide the `endpoint` obtained in the previous step for the operation to be successful.
Run the following command to upload the file:
-{{< command >}}
-$ awslocal mediastore-data put-object \
+```bash
+awslocal mediastore-data put-object \
--endpoint http://mediastore-mycontainer.mediastore.localhost.localstack.cloud:4566 \
--body myfile.txt \
--path /myfolder/myfile.txt \
--content-type binary/octet-stream
-{{< / command >}}
+```
You should see the following output:
@@ -74,12 +73,12 @@ In this process, you need to specify the endpoint, the path for downloading the
The downloaded file will then be accessible at the specified output path.
Run the following command to download the file:
-{{< command >}}
-$ awslocal mediastore-data get-object \
+```bash
+awslocal mediastore-data get-object \
--endpoint http://mediastore-mycontainer.mediastore.localhost.localstack.cloud:4566 \
--path /myfolder/myfile.txt \
/tmp/out.txt
-{{< / command >}}
+```
You should see the following output:
@@ -96,4 +95,4 @@ You should see the following output:
## Troubleshooting
The Elemental MediaStore service requires the use of a custom HTTP/HTTPS endpoint.
-In case you encounter any issues, please consult our [Networking documentation]({{< ref "references/network-troubleshooting" >}}) for assistance.
+In case you encounter any issues, please consult our [Networking documentation](/aws/capabilities/networking/) for assistance.
diff --git a/src/content/docs/aws/services/memorydb.md b/src/content/docs/aws/services/memorydb.md
index 18222654..81b44ea1 100644
--- a/src/content/docs/aws/services/memorydb.md
+++ b/src/content/docs/aws/services/memorydb.md
@@ -1,6 +1,5 @@
---
title: "MemoryDB for Redis"
-linkTitle: "MemoryDB for Redis"
tags: ["Ultimate"]
description: Get started with MemoryDB on LocalStack
---
@@ -11,7 +10,7 @@ MemoryDB is a fully managed, Redis-compatible, in-memory database tailored for w
It streamlines the deployment and management of in-memory databases within the AWS cloud environment, acting as a replacement for using a cache in front of a database for improved durability and performance.
LocalStack provides support for the main MemoryDB APIs surrounding cluster creation, allowing developers to utilize the MemoryDB functionalities in their local development environment.
-The supported APIs are available on our [API Coverage Page]({{< ref "coverage_memorydb" >}}), which provides information on the extent of MemoryDB's integration with LocalStack.
+The supported APIs are available on our [API Coverage Page](), which provides information on the extent of MemoryDB's integration with LocalStack.
## Getting started
@@ -25,42 +24,46 @@ We will demonstrate how you can create a MemoryDB cluster and connect to it.
You can create a MemoryDB cluster using the [`CreateCluster`](https://docs.aws.amazon.com/memorydb/latest/APIReference/API_CreateCluster.html) API.
Run the following command to create a cluster:
-{{< command >}}
-$ awslocal memorydb create-cluster \
+```bash
+awslocal memorydb create-cluster \
--cluster-name my-redis-cluster \
--node-type db.t4g.small \
--acl-name open-access
-{{< /command>}}
+```
Once it becomes available, you will be able to use the cluster endpoint for Redis operations.
Run the following command to retrieve the cluster endpoint using the [`DescribeClusters`](https://docs.aws.amazon.com/memorydb/latest/APIReference/API_DescribeClusters.html) API:
-{{< command >}}
-$ awslocal memorydb describe-clusters --query "Clusters[0].ClusterEndpoint"
+```bash
+awslocal memorydb describe-clusters --query "Clusters[0].ClusterEndpoint"
+```
+
+The output will be similar to the following:
+
+```json
{
"Address": "127.0.0.1",
"Port": 36739
}
-{{< /command >}}
+```
-The cache cluster uses a random port of the [external service port range]({{< ref "external-ports" >}}) in regular execution and a port between 36739 and 46738 in container mode.
+The cache cluster uses a random port from the [external service port range]() in regular execution and a port between 36739 and 46738 in container mode.
Use this port number to connect to the Redis instance using the `redis-cli` command line tool:
-{{< command >}}
-$ redis-cli -p 4510 ping
+```bash
+redis-cli -p 4510 ping
PONG
-$ redis-cli -p 4510 set foo bar
+redis-cli -p 4510 set foo bar
OK
-$ redis-cli -p 4510 get foo
+redis-cli -p 4510 get foo
"bar"
-{{< / command >}}
+```
You can also check the cluster configuration using the [`cluster nodes`](https://redis.io/commands/cluster-nodes) command:
-{{< command >}}
-$ redis-cli -c -p 4510 cluster nodes
-...
-{{< / command >}}
+```bash
+redis-cli -c -p 4510 cluster nodes
+```
## Container mode
diff --git a/src/content/docs/aws/services/mq.md b/src/content/docs/aws/services/mq.md
index 86ab8fe5..1940836b 100644
--- a/src/content/docs/aws/services/mq.md
+++ b/src/content/docs/aws/services/mq.md
@@ -1,6 +1,5 @@
---
title: "MQ"
-linkTitle: "MQ"
description: Get started with MQ on LocalStack
tags: ["Base"]
---
@@ -12,7 +11,7 @@ It facilitates the exchange of messages between various components of distribute
AWS MQ supports popular messaging protocols like MQTT, AMQP, and STOMP, making it suitable for a wide range of messaging use cases.
LocalStack allows you to use the MQ APIs to implement pub/sub messaging, request/response patterns, or distributed event-driven architectures in your local environment.
-The supported APIs are available on our [API Coverage Page]({{< ref "coverage_mq" >}}), which provides information on the extent of MQ integration with LocalStack.
+The supported APIs are available on our [API Coverage Page](), which provides information on the extent of MQ integration with LocalStack.
## Getting started
@@ -26,8 +25,8 @@ We will demonstrate how to create an MQ broker and send a message to a sample qu
You can create a broker using the [`CreateBroker`](https://docs.aws.amazon.com/amazon-mq/latest/api-reference/brokers.html#brokerspost) API.
Run the following command to create a broker named `test-broker` with the following configuration:
-{{< command >}}
-$ awslocal mq create-broker \
+```bash
+awslocal mq create-broker \
--broker-name test-broker \
--deployment-mode SINGLE_INSTANCE \
--engine-type ACTIVEMQ \
@@ -36,22 +35,29 @@ $ awslocal mq create-broker \
--auto-minor-version-upgrade \
--publicly-accessible \
--users='{"ConsoleAccess": true, "Groups": ["testgroup"],"Password": "QXwV*$iUM9USHnVv&!^7s3c@", "Username": "admin"}'
-
+```
+
+The output will be similar to the following:
+
+```json
{
"BrokerArn": "arn:aws:mq:us-east-1:000000000000:broker:test-broker:b-f503abb7-66bc-47fb-b1a9-8d8c51ef6545",
"BrokerId": "b-f503abb7-66bc-47fb-b1a9-8d8c51ef6545"
}
-
-{{< / command >}}
+```
### Describe the broker
You can use the [`DescribeBroker`](https://docs.aws.amazon.com/amazon-mq/latest/api-reference/brokers.html#brokersget) API to get more detailed information about the broker.
Run the following command to get information about the broker we created above:
-{{< command >}}
-$ awslocal mq describe-broker --broker-id
-
+```bash
+awslocal mq describe-broker --broker-id b-f503abb7-66bc-47fb-b1a9-8d8c51ef6545
+```
+
+The output will be similar to the following:
+
+```json
{
"BrokerArn": "arn:aws:mq:us-east-1:000000000000:broker:test-broker:b-f503abb7-66bc-47fb-b1a9-8d8c51ef6545",
@@ -73,26 +79,23 @@ b-f503abb7-66bc-47fb-b1a9-8d8c51ef6545
"HostInstanceType": "mq.t2.micro",
"Tags": {}
}
-
-{{< / command >}}
+```
### Send a message
Now that the broker is actively listening, we can use curl to send a message to a sample queue.
Run the following command to send a message to the `orders.input` queue:
-{{< command >}}
-$ curl -XPOST -d "body=message" http://admin:admin@localhost:4513/api/message\?destination\=queue://orders.input
-{{< / command >}}
+```bash
+curl -XPOST -d "body=message" http://admin:admin@localhost:4513/api/message\?destination\=queue://orders.input
+```
## Resource Browser
The LocalStack Web Application provides a Resource Browser for managing MQ brokers.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resource Browser** section, and then clicking on **MQ** under the **App Integration** section.
-
-
-
+
The Resource Browser allows you to perform the following actions:
diff --git a/src/content/docs/aws/services/msk.md b/src/content/docs/aws/services/msk.md
index e1c7e33b..aa4845fb 100644
--- a/src/content/docs/aws/services/msk.md
+++ b/src/content/docs/aws/services/msk.md
@@ -1,6 +1,5 @@
---
title: "Managed Streaming for Kafka (MSK)"
-linkTitle: "Managed Streaming for Kafka (MSK)"
description: Get started with Managed Streaming for Kafka (MSK) on LocalStack
tags: ["Ultimate"]
persistence: supported with limitations
@@ -13,7 +12,7 @@ MSK offers a centralized platform to facilitate seamless communication between v
MSK also features automatic scaling and built-in monitoring, allowing users to build robust, high-throughput data pipelines.
LocalStack allows you to use the MSK APIs in your local environment to spin up Kafka clusters on the local machine, create topics for exchanging messages, and define event source mappings that trigger Lambda functions when messages are received on a certain topic.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_kafka" >}}), which provides information on the extent of MSK's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of MSK's integration with LocalStack.
## Getting started
@@ -43,13 +42,13 @@ Create the file and add the following content to it:
Run the following command to create the cluster:
-{{< command >}}
-$ awslocal kafka create-cluster \
+```bash
+awslocal kafka create-cluster \
--cluster-name "EventsCluster" \
--broker-node-group-info file://brokernodegroupinfo.json \
--kafka-version "2.8.0" \
--number-of-broker-nodes 3
-{{< / command >}}
+```
The output of the command looks similar to this:
@@ -65,10 +64,10 @@ The cluster creation process might take a few minutes.
You can describe the cluster using the [`DescribeCluster`](https://docs.aws.amazon.com/msk/1.0/apireference/clusters.html#DescribeCluster) API.
Run the following command, replacing `ClusterArn` with the Amazon Resource Name (ARN) you obtained above when you created the cluster.
-{{< command >}}
-$ awslocal kafka describe-cluster \
+```bash
+awslocal kafka describe-cluster \
--cluster-arn "arn:aws:kafka:us-east-1:000000000000:cluster/EventsCluster/b154d18a-8ecb-4691-96b2-50348357fc2f-25"
-{{< / command >}}
+```
The output of the command looks similar to this:
@@ -104,22 +103,22 @@ To use LocalStack MSK, you can download and utilize the Kafka command line inter
To download Apache Kafka, execute the following commands.
-{{< command >}}
-$ wget https://archive.apache.org/dist/kafka/2.8.0/kafka_2.12-2.8.0.tgz
-$ tar -xzf kafka_2.12-2.8.0.tgz
-{{< / command >}}
+```bash
+wget https://archive.apache.org/dist/kafka/2.8.0/kafka_2.12-2.8.0.tgz
+tar -xzf kafka_2.12-2.8.0.tgz
+```
Navigate to the **kafka_2.12-2.8.0** directory.
Execute the following command, replacing `ZookeeperConnectString` with the value you saved after running the [`DescribeCluster`](https://docs.aws.amazon.com/msk/1.0/apireference/clusters.html#DescribeCluster) API:
-{{< command >}}
-$ bin/kafka-topics.sh \
+```bash
+bin/kafka-topics.sh \
--create \
--zookeeper localhost:4510 \
--replication-factor 1 \
--partitions 1 \
--topic LocalMSKTopic
-{{< / command >}}
+```
After executing the command, your output should resemble the following:
@@ -135,13 +134,13 @@ Create a folder named `/tmp` on the client machine, and navigate to the bin fold
Run the following command, replacing `java_home` with the path of your `java_home`.
For this instance, the java_home path is `/Library/Internet\ Plug-Ins/JavaAppletPlugin.plugin/Contents/Home`.
-{{< callout >}}
+:::note
The following step is optional and may not be required, depending on the operating system environment being used.
-{{< /callout >}}
+:::
-{{< command >}}
-$ cp java_home/lib/security/cacerts /tmp/kafka.client.truststore.jks
-{{< / command >}}
+```bash
+cp java_home/lib/security/cacerts /tmp/kafka.client.truststore.jks
+```
While you are still in the `bin` folder of the Apache Kafka installation on the client machine, create a text file named `client.properties` with the following contents:
@@ -151,10 +150,10 @@ ssl.truststore.location=/tmp/kafka.client.truststore.jks
Run the following command, replacing `ClusterArn` with the Amazon Resource Name (ARN) of your cluster.
-{{< command >}}
-$ awslocal kafka get-bootstrap-brokers \
+```bash
+awslocal kafka get-bootstrap-brokers \
--cluster-arn ClusterArn
-{{< / command >}}
+```
To proceed with the following commands, save the value associated with the string named `BootstrapBrokerStringTls` from the JSON result obtained from the previous command.
It should look like this:
@@ -167,12 +166,12 @@ It should look like this:
Now, navigate to the bin folder and run the next command, replacing `BootstrapBrokerStringTls` with the value you obtained:
-{{< command >}}
-$ ./kafka-console-producer.sh \
+```bash
+./kafka-console-producer.sh \
--broker-list BootstrapBrokerStringTls \
--producer.config client.properties \
--topic LocalMSKTopic
-{{< / command >}}
+```
To send messages to your Apache Kafka cluster, enter any desired message and press Enter.
You can repeat this two or three times, sending each line as a separate message to the Kafka cluster, as in the sketch below.
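+
+For example, a hypothetical producer session might look like this, where each line typed after the `>` prompt is sent as one message:
+
+```bash
+> hello from LocalStack
+> this is a second message
+> and a third one
+```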
@@ -182,13 +181,13 @@ Keep the connection to the client machine open, and open a separate connection t
In this new connection, navigate to the `bin` folder and run a command, replacing `BootstrapBrokerStringTls` with the value you saved earlier.
This command will allow you to interact with the Apache Kafka cluster using the saved value for secure communication.
-{{< command >}}
-$ ./kafka-console-consumer.sh \
+```bash
+./kafka-console-consumer.sh \
--bootstrap-server BootstrapBrokerStringTls \
--consumer.config client.properties \
--topic LocalMSKTopic \
--from-beginning
-{{< / command >}}
+```
You should start seeing the messages you entered earlier when you used the console producer command.
These messages are TLS encrypted in transit.
@@ -201,13 +200,13 @@ The configuration for this mapping sets the starting position of the topic to `L
Run the following command to use the [`CreateEventSourceMapping`](https://docs.aws.amazon.com/lambda/latest/dg/API_CreateEventSourceMapping.html) API by specifying the Event Source ARN, the topic name, the starting position, and the Lambda function name.
-{{< command >}}
-$ awslocal lambda create-event-source-mapping \
+```bash
+awslocal lambda create-event-source-mapping \
--event-source-arn arn:aws:kafka:us-east-1:000000000000:cluster/EventsCluster \
--topics LocalMSKTopic \
--starting-position LATEST \
--function-name my-kafka-function
-{{< / command >}}
+```
Upon successful completion of the operation to create the Lambda Event Source Mapping, you can expect the following response:
@@ -240,24 +239,22 @@ You can delete the local MSK cluster using the [`DeleteCluster`](https://docs.aw
To do so, you must first obtain the ARN of the cluster you want to delete.
Run the following command to list all the clusters in the region:
-{{< command >}}
-$ awslocal kafka list-clusters --region us-east-1
-{{< / command >}}
+```bash
+awslocal kafka list-clusters --region us-east-1
+```
To initiate the deletion of a cluster, select the corresponding `ClusterARN` from the list of clusters, and then execute the following command:
-{{< command >}}
+```bash
awslocal kafka delete-cluster --cluster-arn ClusterArn
-{{< / command >}}
+```
## Resource Browser
The LocalStack Web Application provides a Resource Browser for managing MSK clusters.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Kafka** under the **Analytics** section.
-
-
-
+
The Resource Browser allows you to perform the following actions:
diff --git a/src/content/docs/aws/services/mwaa.md b/src/content/docs/aws/services/mwaa.md
index 457b9801..abeaf3e8 100644
--- a/src/content/docs/aws/services/mwaa.md
+++ b/src/content/docs/aws/services/mwaa.md
@@ -1,8 +1,6 @@
---
title: "Managed Workflows for Apache Airflow (MWAA)"
-linkTitle: "Managed Workflows for Apache Airflow (MWAA)"
-description: >
- Get started with Managed Workflows for Apache Airflow (MWAA) on LocalStack
+description: Get started with Managed Workflows for Apache Airflow (MWAA) on LocalStack
tags: ["Ultimate"]
---
@@ -12,7 +10,7 @@ Managed Workflows for Apache Airflow (MWAA) is a fully managed service by AWS th
MWAA leverages the familiar Airflow features and integrations while integrating with S3, Glue, Redshift, Lambda, and other AWS services to build data pipelines and orchestrate data processing workflows in the cloud.
LocalStack allows you to use the MWAA APIs in your local environment to allow the setup and operation of data pipelines.
-The supported APIs are available on the [API coverage page]({{< ref "coverage_mwaa" >}}).
+The supported APIs are available on the [API coverage page]().
## Getting started
@@ -26,34 +24,34 @@ We will demonstrate how to create an Airflow environment and access the Airflow
Create an S3 bucket that will be used for Airflow resources.
Run the following command to create a bucket using the [`mb`](https://docs.aws.amazon.com/cli/latest/reference/s3/mb.html) command.
-{{< command >}}
-$ awslocal s3 mb s3://my-mwaa-bucket
-{{< /command >}}
+```bash
+awslocal s3 mb s3://my-mwaa-bucket
+```
### Create an Airflow environment
You can now create an Airflow environment, using the [`CreateEnvironment`](https://docs.aws.amazon.com/mwaa/latest/API/API_CreateEnvironment.html) API.
Run the following command, specifying the ARN of the bucket we created earlier:
-{{< command >}}
-$ awslocal mwaa create-environment --dag-s3-path /dags \
+```bash
+awslocal mwaa create-environment --dag-s3-path /dags \
--execution-role-arn arn:aws:iam::000000000000:role/airflow-role \
--network-configuration {} \
--source-bucket-arn arn:aws:s3:::my-mwaa-bucket \
--airflow-version 2.10.1 \
--airflow-configuration-options agent.code=007,agent.name=bond \
--name my-mwaa-env
-{{< /command >}}
+```
### Access the Airflow UI
The Airflow UI can be accessed via the URL in the `WebserverUrl` attribute of the response of the `GetEnvironment` operation.
The username and password are always set to `localstack`.
-{{< command >}}
-$ awslocal mwaa get-environment --name my-mwaa-env --query Environment.WebserverUrl
+```bash
+awslocal mwaa get-environment --name my-mwaa-env --query Environment.WebserverUrl
"http://localhost.localstack.cloud:4510"
-{{< /command >}}
+```
LocalStack also prints this information in the logs:
@@ -91,12 +89,12 @@ Just upload your DAGs to the designated S3 bucket path, configured by the `DagS3
For example, the command below uploads a sample DAG named `sample_dag.py` to your S3 bucket named `my-mwaa-bucket`:
-{{< command >}}
-$ awslocal s3 cp sample_dag.py s3://my-mwaa-bucket/dags
-{{< /command >}}
+```bash
+awslocal s3 cp sample_dag.py s3://my-mwaa-bucket/dags
+```
LocalStack syncs new and changed objects in the S3 bucket to the Airflow container every 30 seconds.
-The polling interval can be changed using the [`MWAA_S3_POLL_INTERVAL`]({{< ref "configuration#mwaa" >}}) config option.
+The polling interval can be changed using the [`MWAA_S3_POLL_INTERVAL`](/aws/capabilities/config/configuration/#mwaa) config option.
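+
+For example, a minimal sketch assuming you start LocalStack via the CLI and want a 10-second polling interval:
+
+```bash
+MWAA_S3_POLL_INTERVAL=10 localstack start
+```
+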
## Installing custom plugins
@@ -105,9 +103,9 @@ LocalStack seamlessly supports plugins packaged according to [AWS specifications
To integrate your custom plugins into the MWAA environment, upload the packaged `plugins.zip` file to the designated S3 bucket path:
-{{< command >}}
-$ awslocal s3 cp plugins.zip s3://my-mwaa-bucket/plugins.zip
-{{< /command >}}
+```bash
+awslocal s3 cp plugins.zip s3://my-mwaa-bucket/plugins.zip
+```
## Installing Python dependencies
@@ -124,9 +122,9 @@ botocore==1.20.54
Once you have your `requirements.txt` file ready, upload it to the designated S3 bucket, configured for use by the MWAA environment.
Make sure to upload the file to `/requirements.txt` in the bucket:
-{{< command >}}
-$ awslocal s3 cp requirements.txt s3://my-mwaa-bucket/requirements.txt
-{{< /command >}}
+```bash
+awslocal s3 cp requirements.txt s3://my-mwaa-bucket/requirements.txt
+```
After the upload, the environment will be automatically updated, and your Apache Airflow setup will be equipped with the new dependencies.
It is important to note that, unlike [AWS](https://docs.aws.amazon.com/mwaa/latest/userguide/connections-packages.html), LocalStack does not install any provider packages by default.
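+
+For example, if your DAGs use Amazon provider operators, a `requirements.txt` along these lines (the provider package is illustrative, not prescribed by this guide) makes them available:
+
+```text
+apache-airflow-providers-amazon
+botocore==1.20.54
+```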
@@ -143,9 +141,7 @@ This information must be explicitly passed in operators, hooks, and sensors.
The LocalStack Web Application provides a Resource Browser for managing MWAA Environments.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resource Browser** section, and then clicking on **MWAA** under the **App Integration** section.
-
-
-
+
The Resource Browser allows you to perform the following actions:
diff --git a/src/content/docs/aws/services/neptune.md b/src/content/docs/aws/services/neptune.md
index 58f84699..dfca2833 100644
--- a/src/content/docs/aws/services/neptune.md
+++ b/src/content/docs/aws/services/neptune.md
@@ -1,8 +1,6 @@
---
title: "Neptune"
-linkTitle: "Neptune"
-description: >
- Get started with Neptune on LocalStack
+description: Get started with Neptune on LocalStack
tags: ["Ultimate"]
---
@@ -13,7 +11,7 @@ It is designed for storing and querying highly connected data for applications t
Neptune supports popular graph query languages like Gremlin and SPARQL, making it compatible with a wide range of graph applications and tools.
LocalStack allows you to use the Neptune APIs in your local environment to support both property graph and RDF graph models.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_neptune" >}}), which provides information on the extent of Neptune's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of Neptune's integration with LocalStack.
The following versions of Neptune engine are supported by LocalStack:
@@ -52,11 +50,11 @@ We will demonstrate the following with AWS CLI & Python:
To create a Neptune cluster you can use the [`CreateDBCluster`](https://docs.aws.amazon.com/neptune/latest/userguide/api-clusters.html#CreateDBCluster) API.
Run the following command to create a Neptune cluster:
-{{< command >}}
-$ awslocal neptune create-db-cluster \
+```bash
+awslocal neptune create-db-cluster \
--engine neptune \
--db-cluster-identifier my-neptune-db
-{{< / command >}}
+```
You should see the following output:
@@ -77,13 +75,13 @@ You should see the following output:
To add an instance you can use the [`CreateDBInstance`](https://docs.aws.amazon.com/neptune/latest/userguide/api-instances.html#CreateDBInstance) API.
Run the following command to create a Neptune instance:
-{{< command >}}
-$ awslocal neptune create-db-instance \
+```bash
+awslocal neptune create-db-instance \
--db-cluster-identifier my-neptune-db \
--db-instance-identifier my-neptune-instance \
--engine neptune \
--db-instance-class db.t3.medium
-{{< / command >}}
+```
In LocalStack, the `Endpoint` for the `DBCluster` and the `Endpoint.Address` of the `DBInstance` will be the same and can be used to connect to the graph database.
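+
+For example, a sketch that reads the shared endpoint of the cluster created above using the `DescribeDBClusters` API:
+
+```bash
+awslocal neptune describe-db-clusters \
+    --db-cluster-identifier my-neptune-db \
+    --query "DBClusters[0].[Endpoint,Port]"
+```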
@@ -144,7 +142,7 @@ if __name__ == '__main__':
Amazon Neptune resources with IAM DB authentication enabled require all requests to use AWS Signature Version 4.
-When LocalStack starts with [IAM enforcement enabled]({{< ref "/user-guide/security-testing" >}}), the Neptune database checks user permissions before granting access.
+When LocalStack starts with [IAM enforcement enabled](/aws/capabilities/security-testing/iam-policy-enforcement), the Neptune database checks user permissions before granting access.
The following Gremlin query actions are available for database engine versions `1.3.2.0` and higher:
```json
@@ -159,37 +157,44 @@ When LocalStack starts with [IAM enforcement enabled]({{< ref "/user-guide/secur
Start LocalStack with `LOCALSTACK_ENFORCE_IAM=1` to create a Neptune cluster with IAM DB authentication enabled.
-{{< command >}}
-$ LOCALSTACK_ENFORCE_IAM=1 localstack start
-{{< /command >}}
+```bash
+LOCALSTACK_ENFORCE_IAM=1 localstack start
+```
You can then create a cluster.
-{{< command >}}
-$ awslocal neptune create-db-cluster \
+```bash
+awslocal neptune create-db-cluster \
--engine neptune \
--db-cluster-identifier myneptune-db \
--enable-iam-database-authentication
-{{< /command >}}
+```
After the cluster is deployed, the Gremlin server will reject unsigned queries.
-{{< command >}}
-$ curl "https://localhost.localstack.cloud:4510/gremlin?gremlin=g.V()" -v
-...
+```bash
+curl "https://localhost.localstack.cloud:4510/gremlin?gremlin=g.V()" -v
+```
+
+The output will be similar to the following:
+
+```bash
* Request completely sent off
< HTTP/1.1 403 Forbidden
* no chunk, no close, no size. Assume close to signal end
...
-
-{{< /command >}}
+```
Use the Python package [awscurl](https://pypi.org/project/awscurl/) to make your first signed query.
-{{< command >}}
-$ awscurl "https://localhost.localstack.cloud:4510/gremlin?gremlin=g.V().count()" -H "Accept: application/json" | jq .
-
+```bash
+awscurl "https://localhost.localstack.cloud:4510/gremlin?gremlin=g.V().count()" -H "Accept: application/json" | jq .
+```
+
+The output will be similar to the following:
+
+```json
{
"requestId": "729c3e7b-50b3-4df7-b0b6-d1123c4e81df",
"status": {
@@ -216,25 +221,23 @@ $ awscurl "https://localhost.localstack.cloud:4510/gremlin?gremlin=g.V().count()
}
}
}
-
-{{< /command >}}
+```
-{{< callout "note" >}}
+:::note
If Gremlin Server is installed in your LocalStack environment, you must delete it and restart LocalStack.
-You can find your LocalStack volume location on the [LocalStack filesystem documentation]({{< ref "/references/filesystem/#localstack-volume" >}}).
-{{< command >}}
-$ rm -rf /lib/tinkerpop
-{{< /command >}}
-{{< /callout >}}
+You can find your LocalStack volume location on the [LocalStack filesystem documentation](/aws/capabilities/config/filesystem/#localstack-volume).
+
+```bash
+rm -rf <localstack-volume>/lib/tinkerpop
+```
+:::
## Resource Browser
The LocalStack Web Application provides a Resource Browser for managing Neptune databases and clusters.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Neptune** under the **Database** section.
-
-
-
+
The Resource Browser allows you to perform the following actions:
diff --git a/src/content/docs/aws/services/opensearch.md b/src/content/docs/aws/services/opensearch.mdx
similarity index 83%
rename from src/content/docs/aws/services/opensearch.md
rename to src/content/docs/aws/services/opensearch.mdx
index 9a94b81d..0bb8b7e1 100644
--- a/src/content/docs/aws/services/opensearch.md
+++ b/src/content/docs/aws/services/opensearch.mdx
@@ -1,8 +1,6 @@
---
title: "OpenSearch Service"
-linkTitle: "OpenSearch Service"
-description: >
- Get started with OpenSearch Service on LocalStack
+description: Get started with OpenSearch Service on LocalStack
tags: ["Free"]
---
@@ -12,7 +10,7 @@ OpenSearch Service is an open-source search and analytics engine, offering devel
OpenSearch Service also offers log analytics, real-time application monitoring, and clickstream analysis.
LocalStack allows you to use the OpenSearch Service APIs in your local environment to create, manage, and operate the OpenSearch clusters.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_opensearch" >}}), which provides information on the extent of OpenSearch's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of OpenSearch's integration with LocalStack.
The following versions of OpenSearch Service are supported by LocalStack:
@@ -42,9 +40,9 @@ To create an OpenSearch Service cluster, you can use the [`CreateDomain`](https:
An OpenSearch Service domain is synonymous with an OpenSearch cluster.
Execute the following command to create a new OpenSearch domain:
-{{< command >}}
-$ awslocal opensearch create-domain --domain-name my-domain
-{{< / command >}}
+```bash
+awslocal opensearch create-domain --domain-name my-domain
+```
Each time you establish a cluster using a new version of OpenSearch, the corresponding OpenSearch binary must be downloaded, a process that might require some time to complete.
In the LocalStack log, you will see output similar to the following, showing the cluster starting up in the background.
@@ -52,10 +50,10 @@ In the LocalStack log you will see something like, where you can see the cluster
You can open the LocalStack logs to see that the OpenSearch Service cluster is being created in the background.
You can use the [`DescribeDomain`](https://docs.aws.amazon.com/opensearch-service/latest/APIReference/API_DescribeDomain.html) API to check the status of the cluster:
-{{< command >}}
-$ awslocal opensearch describe-domain \
+```bash
+awslocal opensearch describe-domain \
--domain-name my-domain | jq ".DomainStatus.Processing"
-{{< / command >}}
+```
The `Processing` attribute will be `false` once the cluster is up and running, at which point you can interact with it.
@@ -66,15 +64,15 @@ You can now interact with the cluster at the cluster API endpoint for the domain
Run the following command to query the cluster endpoint:
-{{< command >}}
-$ curl http://my-domain.us-east-1.opensearch.localhost.localstack.cloud:4566
-{{< / command >}}
+```bash
+curl http://my-domain.us-east-1.opensearch.localhost.localstack.cloud:4566
+```
You can verify that the cluster is up and running by checking the cluster health:
-{{< command >}}
-$ curl -s http://my-domain.us-east-1.opensearch.localhost.localstack.cloud:4566/_cluster/health | jq .
-{{< / command >}}
+```bash
+curl -s http://my-domain.us-east-1.opensearch.localhost.localstack.cloud:4566/_cluster/health | jq .
+```
The following output will be visible on your terminal:
@@ -108,7 +106,7 @@ The strategy can be configured via the `OPENSEARCH_ENDPOINT_STRATEGY` environmen
| ------- | ----------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------- |
| `domain` | `<domain-name>.<region>.<engine-type>.localhost.localstack.cloud:4566` | The default strategy employing the `localhost.localstack.cloud` domain for routing to localhost. |
| `path` | `localhost:4566/<engine-type>/<region>/<domain-name>` | An alternative strategy useful if resolving LocalStack's localhost domain poses difficulties. |
-| `port` | `localhost:` | Directly exposes cluster(s) via ports from [the external service port range]({{< ref "external-ports" >}}). |
+| `port` | `localhost:<port>` | Directly exposes cluster(s) via ports from [the external service port range](). |
Irrespective of the originating service for the clusters, the domain of each cluster consistently aligns with its engine type, be it OpenSearch or Elasticsearch.
Consequently, OpenSearch clusters incorporate `opensearch` within their domains (e.g., `my-domain.us-east-1.opensearch.localhost.localstack.cloud:4566`), while Elasticsearch clusters feature `es` in their domains (e.g., `my-domain.us-east-1.es.localhost.localstack.cloud:4566`).
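+
+For example, a sketch assuming the `path` strategy and the domain created above, following the `localhost:4566/<engine-type>/<region>/<domain-name>` layout from the table:
+
+```bash
+OPENSEARCH_ENDPOINT_STRATEGY=path localstack start
+curl -s http://localhost:4566/opensearch/us-east-1/my-domain/_cluster/health | jq .
+```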
@@ -121,16 +119,16 @@ Moreover, you can opt for custom domains, though it's important to incorporate t
Run the following command to create a new OpenSearch domain with a custom endpoint:
-{{< command >}}
-$ awslocal opensearch create-domain --domain-name my-domain \
+```bash
+awslocal opensearch create-domain --domain-name my-domain \
--domain-endpoint-options '{ "CustomEndpoint": "http://localhost:4566/my-custom-endpoint", "CustomEndpointEnabled": true }'
-{{< / command >}}
+```
After the domain processing is complete, you can access the cluster using the custom endpoint:
-{{< command >}}
-$ curl http://localhost:4566/my-custom-endpoint/_cluster/health
-{{< / command >}}
+```bash
+curl http://localhost:4566/my-custom-endpoint/_cluster/health
+```
## Re-using a single cluster instance
@@ -146,20 +144,22 @@ As a result, we advise caution when considering this approach and generally reco
OpenSearch will be organized in your state directory as follows:
-{{< command >}}
-$ tree -L 4 ./volume/state
-./volume/state
-├── opensearch
-│ └── arn:aws:es:us-east-1:000000000000:domain
-│ ├── my-cluster-1
-│ │ ├── backup
-│ │ ├── data
-│ │ └── tmp
-│ ├── my-cluster-2
-│ │ ├── backup
-│ │ ├── data
-│ │ └── tmp
-{{< /command >}}
+import { Tabs, TabItem, FileTree } from '@astrojs/starlight/components';
+
+<FileTree>
+
+- volume
+ - state
+ - opensearch
+ - arn:aws:es:us-east-1:000000000000:domain
+ - my-cluster-1
+ - backup
+ - data
+ - tmp
+ - my-cluster-2
+ - backup
+ - data
+ - tmp
+
+</FileTree>
## Advanced Security Options
@@ -208,15 +208,15 @@ Save it in a file named `opensearch_domain.json`.
To provision it, use the following `awslocal` CLI command, assuming the aforementioned CLI input has been stored in a file named `opensearch_domain.json`:
-{{< command >}}
-$ awslocal opensearch create-domain --cli-input-json file://./opensearch_domain.json
-{{< /command >}}
+```bash
+awslocal opensearch create-domain --cli-input-json file://./opensearch_domain.json
+```
Once the domain setup is complete (`Processing: false`), the cluster can only be accessed with the given master user credentials, via HTTP basic authentication:
-{{< command >}}
-$ curl -u 'admin:really-secure-passwordAa!1' http://secure-domain.us-east-1.opensearch.localhost.localstack.cloud:4566/_cluster/health
-{{< /command >}}
+```bash
+curl -u 'admin:really-secure-passwordAa!1' http://secure-domain.us-east-1.opensearch.localhost.localstack.cloud:4566/_cluster/health
+```
The following output will be visible on your terminal:
@@ -232,22 +232,23 @@ It's important to note that any unauthorized requests will yield an HTTP respons
And you can directly use the official OpenSearch Dashboards Docker image to analyze data in your OpenSearch domain within LocalStack!
When using OpenSearch Dashboards with LocalStack, you need to make sure to:
-- Enable the [advanced security options]({{< ref "#advanced-security-options" >}}) and set a username and a password.
+- Enable the [advanced security options](#advanced-security-options) and set a username and a password.
This is required by OpenSearch Dashboards.
- Ensure that the OpenSearch Dashboards Docker container uses the LocalStack DNS.
- You can find more information on how to connect your Docker container to Localstack in our [Network Troubleshooting guide]({{< ref "references/network-troubleshooting/endpoint-url/#from-your-container" >}}).
+  You can find more information on how to connect your Docker container to LocalStack in our [Network Troubleshooting guide]().
First, you need to make sure to start LocalStack in a specific Docker network:
-{{< command >}}
-$ localstack start --network ls
-{{< /command >}}
+
+```bash
+localstack start --network ls
+```
Now you can provision a new OpenSearch domain.
-Make sure to enable the [advanced security options]({{< ref "#advanced-security-options" >}}):
+Make sure to enable the [advanced security options](#advanced-security-options):
-{{< command >}}
-$ awslocal opensearch create-domain --cli-input-json file://./opensearch_domain.json
-{{< /command >}}
+```bash
+awslocal opensearch create-domain --cli-input-json file://./opensearch_domain.json
+```
Now you can start another container for OpenSearch Dashboards, which is configured such that:
- The port for OpenSearch Dashboards is mapped (`5601`).
@@ -257,10 +258,9 @@ Now you can start another container for OpenSearch Dashboards, which is configur
- The OpenSearch credentials are set.
- The version of OpenSearch Dashboards is the same as the OpenSearch domain.
-{{< command >}}
+```bash
docker inspect localstack-main | \
jq -r '.[0].NetworkSettings.Networks | to_entries | .[].value.IPAddress'
-# prints 172.22.0.2
docker run --rm -p 5601:5601 \
--network ls \
@@ -268,7 +268,7 @@ docker run --rm -p 5601:5601 \
-e "OPENSEARCH_HOSTS=http://secure-domain.us-east-1.opensearch.localhost.localstack.cloud:4566" \
-e "OPENSEARCH_USERNAME=admin" -e 'OPENSEARCH_PASSWORD=really-secure-passwordAa!1' \
opensearchproject/opensearch-dashboards:2.11.0
-{{< /command >}}
+```
Once the container is running, you can reach OpenSearch Dashboards at `http://localhost:5601` and you can log in with your OpenSearch domain credentials.
@@ -329,45 +329,43 @@ volumes:
You can start the Docker Compose environment using the following command:
-{{< command >}}
-$ docker-compose up -d
-{{< /command >}}
+```bash
+docker-compose up -d
+```
You can now create an OpenSearch cluster using the `awslocal` CLI:
-{{< command >}}
-$ awslocal opensearch create-domain --domain-name my-domain
-{{< /command >}}
+```bash
+awslocal opensearch create-domain --domain-name my-domain
+```
If the `Processing` status shows as `true`, the cluster isn't fully operational yet.
You can use the `describe-domain` command to retrieve the current status:
-{{< command >}}
-$ awslocal opensearch describe-domain --domain-name my-domain
-{{< /command >}}
+```bash
+awslocal opensearch describe-domain --domain-name my-domain
+```
You can now verify cluster health and set up indices:
-{{< command >}}
-$ curl my-domain.us-east-1.opensearch.localhost.localstack.cloud:4566/_cluster/health | jq
-{{< /command >}}
+```bash
+curl my-domain.us-east-1.opensearch.localhost.localstack.cloud:4566/_cluster/health | jq
+```
The output will provide insights into the cluster's health and version information.
Finally, create an example index using the following command:
-{{< command >}}
-$ curl -X PUT my-domain.us-east-1.opensearch.localhost.localstack.cloud:4566/my-index
-{{< /command >}}
+```bash
+curl -X PUT my-domain.us-east-1.opensearch.localhost.localstack.cloud:4566/my-index
+```
## Resource Browser
The LocalStack Web Application provides a Resource Browser for managing OpenSearch domains.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **OpenSearch Service** under the **Analytics** section.
-
-
-
+
The Resource Browser allows you to perform the following actions:
@@ -390,4 +388,4 @@ The `CustomEndpointOptions` in LocalStack offers the flexibility to utilize arbi
## Troubleshooting
If you encounter difficulties resolving subdomains while employing the `OPENSEARCH_ENDPOINT_STRATEGY=domain` (the default setting), it's advisable to investigate whether your DNS configuration might be obstructing rebind queries.
-For further insights on addressing this issue, refer to the section on [DNS rebind protection]({{< ref "dns-server#dns-rebind-protection" >}}).
+For further insights on addressing this issue, refer to the section on [DNS rebind protection](/aws/tooling/dns-server#dns-rebind-protection).
diff --git a/src/content/docs/aws/services/organizations.md b/src/content/docs/aws/services/organizations.md
index 73ab5ee2..85b906e8 100644
--- a/src/content/docs/aws/services/organizations.md
+++ b/src/content/docs/aws/services/organizations.md
@@ -1,15 +1,16 @@
---
title: "Organizations"
-linkTitle: "Organizations"
tags: ["Ultimate"]
description: Get started with AWS Organizations on LocalStack
---
+## Introduction
+
Amazon Web Services Organizations is an account management service that allows you to consolidate multiple AWS accounts into an organization.
It allows you to manage different accounts in a single organization and consolidate billing.
With Organizations, you can also attach different policies to your organizational units (OUs) or individual accounts in your organization.
-Organizations is available over LocalStack Pro and the supported APIs are available over our [configuration page]({{< ref "configuration" >}}).
+Organizations is available in LocalStack Pro, and the supported APIs are listed on our [configuration page](/aws/capabilities/config/configuration).
## Getting started
@@ -18,69 +19,69 @@ This guide is intended for users who wish to get more acquainted with Organizati
To get started, start your LocalStack instance using your preferred method:
1. Create a new local AWS Organization with the feature set flag set to `ALL`:
- {{< command >}}
- $ awslocal organizations create-organization --feature-set ALL
- {{< /command >}}
+ ```bash
+ awslocal organizations create-organization --feature-set ALL
+ ```
2. You can now run the `describe-organization` command to see the details of your organization:
- {{< command >}}
- $ awslocal organizations describe-organization
- {{< /command >}}
+ ```bash
+ awslocal organizations describe-organization
+ ```
3. You can now create an AWS account that will be a member of your organization:
- {{< command >}}
- $ awslocal organizations create-account \
+ ```bash
+ awslocal organizations create-account \
--email example@example.com \
--account-name "Test Account"
- {{< /command >}}
+ ```
Since LocalStack essentially mocks AWS, the account creation is instantaneous.
You can now run the `list-accounts` command to see the details of your organization:
- {{< command >}}
- $ awslocal organizations list-accounts
- {{< /command >}}
+ ```bash
+ awslocal organizations list-accounts
+ ```
4. You can also remove a member account from your organization:
- {{< command >}}
- $ awslocal organizations remove-account-from-organization --account-id
- {{< /command >}}
+ ```bash
+ awslocal organizations remove-account-from-organization --account-id <account-id>
+ ```
5. To close an account in your organization, you can run the `close-account` command:
- {{< command >}}
- $ awslocal organizations close-account --account-id 000000000000
- {{< /command >}}
+ ```bash
+ awslocal organizations close-account --account-id 000000000000
+ ```
6. You can use organizational units (OUs) to group accounts together to administer as a single unit.
To create an OU, you can run:
- {{< command >}}
- $ awslocal organizations list-roots
- $ awslocal organizations list-children \
+ ```bash
+ awslocal organizations list-roots
+ awslocal organizations list-children \
--parent-id <parent-id> \
--child-type ORGANIZATIONAL_UNIT
- $ awslocal organizations create-organizational-unit \
+ awslocal organizations create-organizational-unit \
--parent-id <parent-id> \
--name New-Child-OU
- {{< /command >}}
+ ```
7. Before you can create and attach a policy to your organization, you must enable a policy type.
To enable a policy type, you can run:
- {{< command >}}
- $ awslocal organizations enable-policy-type \
+ ```bash
+ awslocal organizations enable-policy-type \
--root-id <root-id> \
--policy-type BACKUP_POLICY
- {{< /command >}}
+ ```
To disable a policy type, you can run:
- {{< command >}}
- $ awslocal organizations disable-policy-type \
+ ```bash
+ awslocal organizations disable-policy-type \
--root-id <root-id> \
--policy-type BACKUP_POLICY
- {{< /command >}}
+ ```
8. To view the policies that are attached to your organization, you can run:
- {{< command >}}
- $ awslocal organizations list-policies --filter SERVICE_CONTROL_POLICY
- {{< /command >}}
+ ```bash
+ awslocal organizations list-policies --filter SERVICE_CONTROL_POLICY
+ ```
9. To delete an organization, you can run:
- {{< command >}}
- $ awslocal organizations delete-organization
- {{< /command >}}
+ ```bash
+ awslocal organizations delete-organization
+ ```
diff --git a/src/content/docs/aws/services/pca.md b/src/content/docs/aws/services/pca.md
index ec778e85..ffee90ed 100644
--- a/src/content/docs/aws/services/pca.md
+++ b/src/content/docs/aws/services/pca.md
@@ -1,6 +1,5 @@
---
-title: "Private Certificate Authority (ACM PCA)"
-linkTitle: "Private Certificate Authority (ACM PCA)"
+title: Private Certificate Authority (ACM PCA)
description: Get started with Private Certificate Authority (ACM PCA) on LocalStack
tags: ["Ultimate"]
---
@@ -12,7 +11,7 @@ ACM PCA extends ACM's certificate management capabilities to private certificate
LocalStack allows you to use the ACM PCA APIs to create, list, and delete private certificates.
You can also create, describe, tag, and list tags for a CA using ACM PCA.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_acm-pca" >}}), which provides information on the extent of ACM PCA's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of ACM PCA's integration with LocalStack.
## Getting started
@@ -24,8 +23,8 @@ We will follow the procedure to create and install a certificate for a single-le
Start by creating a new Certificate Authority with ACM PCA using the [`CreateCertificateAuthority`](https://docs.aws.amazon.com/privateca/latest/APIReference/API_CreateCertificateAuthority.html) API.
This command sets up a new CA with specified configurations for key algorithm, signing algorithm, and subject information.
-{{< command >}}
-$ awslocal acm-pca create-certificate-authority \
+```bash
+awslocal acm-pca create-certificate-authority \
--certificate-authority-configuration '{
"KeyAlgorithm":"RSA_2048",
"SigningAlgorithm":"SHA256WITHRSA",
@@ -37,22 +36,29 @@ $ awslocal acm-pca create-certificate-authority \
}
}' \
--certificate-authority-type "ROOT"
-
+```
+
+The output will be similar to the following:
+
+```json
{
"CertificateAuthorityArn": "arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff"
}
-
-{{< /command >}}
+```
Note the `CertificateAuthorityArn` from the output as it will be needed for subsequent commands.
To retrieve the detailed information about the created Certificate Authority, use the [`DescribeCertificateAuthority`](https://docs.aws.amazon.com/privateca/latest/APIReference/API_DescribeCertificateAuthority.html) API.
This command returns the detailed information about the CA, including the CA's ARN, status, and configuration.
-{{< command >}}
-$ awslocal acm-pca describe-certificate-authority \
+```bash
+awslocal acm-pca describe-certificate-authority \
--certificate-authority-arn arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff
-
+```
+
+The output will be similar to the following:
+
+```json
{
"CertificateAuthority": {
"Arn": "arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff",
@@ -79,8 +85,7 @@ $ awslocal acm-pca describe-certificate-authority \
"UsageMode": "SHORT_LIVED_CERTIFICATE"
}
}
-
-{{< /command >}}
+```
Note the `PENDING_CERTIFICATE` status.
In the following steps, we will create and attach a certificate for this CA.
@@ -89,27 +94,30 @@ In the following steps, we will create and attach a certificate for this CA.
Use the [`GetCertificateAuthorityCsr`](https://docs.aws.amazon.com/privateca/latest/APIReference/API_GetCertificateAuthorityCsr.html) operation to obtain the Certificate Signing Request (CSR) for the CA.
-{{< command >}}
-$ awslocal acm-pca get-certificate-authority-csr \
+```bash
+awslocal acm-pca get-certificate-authority-csr \
--certificate-authority-arn arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff \
--output text | tee ca.csr
-{{< /command >}}
+```
Next, issue the certificate for the CA using this CSR.
-{{< command >}}
-$ awslocal acm-pca issue-certificate \
+```bash
+awslocal acm-pca issue-certificate \
--csr fileb://ca.csr \
--signing-algorithm SHA256WITHRSA \
--template-arn arn:aws:acm-pca:::template/RootCACertificate/V1 \
--validity Value=10,Type=YEARS \
--certificate-authority-arn arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff
-
+```
+
+The output will be similar to the following:
+
+```json
{
"CertificateArn": "arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff/certificate/17ef7bbf3cc6471ba3ef0707119b8392"
}
-
-{{< /command >}}
+```
The CA certificate is now created and its ARN is indicated by the `CertificateArn` parameter.
@@ -117,31 +125,36 @@ The CA certificate is now created and its ARN is indicated by the `CertificateAr
Finally, we retrieve the signed certificate with [`GetCertificate`](https://docs.aws.amazon.com/privateca/latest/APIReference/API_GetCertificate.html) and import it using [`ImportCertificateAuthorityCertificate`](https://docs.aws.amazon.com/privateca/latest/APIReference/API_ImportCertificateAuthorityCertificate.html).
-{{< command >}}
-$ awslocal acm-pca get-certificate \
+```bash
+awslocal acm-pca get-certificate \
--certificate-authority-arn arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff \
--certificate-arn arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff/certificate/17ef7bbf3cc6471ba3ef0707119b8392 \
--output text | tee cert.pem
-{{< /command >}}
+```
+
+Then, import the signed certificate into the CA:
-{{< command >}}
-$ awslocal acm-pca import-certificate-authority-certificate \
+```bash
+awslocal acm-pca import-certificate-authority-certificate \
--certificate-authority-arn arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff \
--certificate fileb://cert.pem
-{{< /command >}}
+```
The CA is now ready for use.
You can verify this by checking its status:
-{{< command >}}
-$ awslocal acm-pca describe-certificate-authority \
+```bash
+awslocal acm-pca describe-certificate-authority \
--certificate-authority-arn arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff \
--query CertificateAuthority.Status \
--output text
-
+```
+
+The output will be:
+
+```bash
ACTIVE
-
-{{< /command >}}
+```
The CA certificate can be retrieved at a later point using [`GetCertificateAuthorityCertificate`](https://docs.aws.amazon.com/privateca/latest/APIReference/API_GetCertificateAuthorityCertificate.html).
In general, this operation returns both the certificate and the certificate chain.
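+
+A minimal sketch, reusing the CA ARN from above:
+
+```bash
+awslocal acm-pca get-certificate-authority-certificate \
+    --certificate-authority-arn arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff \
+    --output text
+```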
@@ -154,16 +167,20 @@ With the private CA set up, you can now issue end-entity certificates.
Using [OpenSSL](https://openssl-library.org/), create a CSR and the private key:
-{{< command >}}
-$ openssl req -out local-csr.pem -new -newkey rsa:2048 -nodes -keyout local-pkey.pem
-{{< /command >}}
+```bash
+openssl req -out local-csr.pem -new -newkey rsa:2048 -nodes -keyout local-pkey.pem
+```
You may inspect the CSR using the following command.
It should resemble the illustrated output.
-{{< command >}}
-$ openssl req -in local-csr.pem -text -noout
-
+```bash
+openssl req -in local-csr.pem -text -noout
+```
+
+The output will be similar to the following:
+
+```bash
Certificate Request:
Data:
Version: 1 (0x0)
@@ -182,34 +199,41 @@ Certificate Request:
Signature Value:
3e:23:12:26:45:af:39:35:5d:d7:b4:40:fb:1a:08:c7:16:c3:
...
-
-{{< /command >}}
+```
Next, use [`IssueCertificate`](https://docs.aws.amazon.com/privateca/latest/APIReference/API_IssueCertificate.html) to generate the end-entity certificate.
Note that no [certificate template](https://docs.aws.amazon.com/privateca/latest/userguide/UsingTemplates.html) is specified, which causes an end-entity certificate to be issued by default.
-{{< command >}}
-$ awslocal acm-pca issue-certificate \
+```bash
+awslocal acm-pca issue-certificate \
--certificate-authority-arn arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff \
--csr fileb://local-csr.pem \
--signing-algorithm "SHA256WITHRSA" \
--validity Value=365,Type="DAYS"
-
+```
+
+The output will be similar to the following:
+
+```json
{
"CertificateArn": "arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff/certificate/079d0a13daf943f6802d365dd83658c7"
}
-
-{{< /command >}}
+```
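+
+Before verifying the certificate in the next section, retrieve the issued end-entity certificate to a local file.
+A sketch using [`GetCertificate`](https://docs.aws.amazon.com/privateca/latest/APIReference/API_GetCertificate.html) with the certificate ARN returned above (the filename `local-cert.pem` is our choice):
+
+```bash
+awslocal acm-pca get-certificate \
+    --certificate-authority-arn arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff \
+    --certificate-arn arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff/certificate/079d0a13daf943f6802d365dd83658c7 \
+    --output text | tee local-cert.pem
+```
+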
### Verify Certificates
Using OpenSSL, you can verify that the end-entity certificate was indeed signed by the CA.
In the following command, `local-cert.pem` refers to the end-entity certificate and `cert.pem` refers to the CA certificate.
-{{< command >}}
-$ openssl verify -CAfile cert.pem local-cert.pem
+```bash
+openssl verify -CAfile cert.pem local-cert.pem
+```
+
+The output will be:
+
+```bash
local-cert.pem: OK
-{{< /command >}}
+```
### Tag the Certificate Authority
@@ -217,20 +241,24 @@ Tagging resources in AWS helps in managing and identifying them.
Use the [`TagCertificateAuthority`](https://docs.aws.amazon.com/privateca/latest/APIReference/API_TagCertificateAuthority.html) API to tag the created Certificate Authority.
This command adds the specified tags to the specified CA.
-{{< command >}}
-$ awslocal acm-pca tag-certificate-authority \
+```bash
+awslocal acm-pca tag-certificate-authority \
--certificate-authority-arn arn:aws:acm-pca:us-east-1:000000000000:certificate-authority/f38ee966-bc23-40f8-8143-e981aee73600 \
--tags Key=Admin,Value=Alice
-{{< /command >}}
+```
After tagging your Certificate Authority, you may want to view these tags.
You can use the [`ListTags`](https://docs.aws.amazon.com/privateca/latest/APIReference/API_ListTags.html) API to list all the tags associated with the specified CA.
-{{< command >}}
-$ awslocal acm-pca list-tags \
+```bash
+awslocal acm-pca list-tags \
--certificate-authority-arn arn:aws:acm-pca:us-east-1:000000000000:certificate-authority/f38ee966-bc23-40f8-8143-e981aee73600 \
--max-results 10
-
+```
+
+The output will be similar to the following:
+
+```json
{
"Tags": [
{
@@ -243,5 +271,4 @@ $ awslocal acm-pca list-tags \
}
]
}
-
-{{< /command >}}
+```
diff --git a/src/content/docs/aws/services/pinpoint.md b/src/content/docs/aws/services/pinpoint.md
index d11d67f1..d8d33105 100644
--- a/src/content/docs/aws/services/pinpoint.md
+++ b/src/content/docs/aws/services/pinpoint.md
@@ -1,16 +1,15 @@
---
title: "Pinpoint"
-linkTitle: "Pinpoint"
description: Get started with Pinpoint on LocalStack
tags: ["Ultimate"]
persistence: supported
---
-{{< callout "warning" >}}
+:::danger
Amazon Pinpoint will be [retired on 30 October 2026](https://docs.aws.amazon.com/pinpoint/latest/userguide/migrate.html).
It will be removed from LocalStack soon after this date.
-{{< /callout >}}
+:::
## Introduction
@@ -18,7 +17,7 @@ Pinpoint is a customer engagement service to facilitate communication across mul
Pinpoint allows developers to create and manage customer segments based on various attributes, such as user behavior and demographics, while integrating with other AWS services to send targeted messages to customers.
LocalStack allows you to mock the Pinpoint APIs in your local environment.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_pinpoint" >}}), which provides information on the extent of Pinpoint's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of Pinpoint's integration with LocalStack.
## Getting started
@@ -32,10 +31,10 @@ We will demonstrate how to create a Pinpoint application, retrieve all applicati
Create a Pinpoint application using the [`CreateApp`](https://docs.aws.amazon.com/pinpoint/latest/apireference/apps-application-id.html) API.
Execute the following command:
-{{< command >}}
-$ awslocal pinpoint create-app \
+```bash
+awslocal pinpoint create-app \
--create-application-request Name=ExampleCorp,tags={"Stack"="Test"}
-{{< /command >}}
+```
The following output would be retrieved:
@@ -55,9 +54,9 @@ The following output would be retrieved:
You can list all applications using the [`GetApps`](https://docs.aws.amazon.com/pinpoint/latest/apireference/apps.html) API.
Execute the following command:
-{{< command >}}
-$ awslocal pinpoint get-apps
-{{< /command >}}
+```bash
+awslocal pinpoint get-apps
+```
The following output would be retrieved:
@@ -81,10 +80,10 @@ The following output would be retrieved:
You can list all tags for the application using the [`GetApp`](https://docs.aws.amazon.com/pinpoint/latest/apireference/apps-application-id.html) API.
Execute the following command:
-{{< command >}}
-$ awslocal pinpoint list-tags-for-resource \
+```bash
+awslocal pinpoint list-tags-for-resource \
--resource-arn arn:aws:mobiletargeting:us-east-1:000000000000:apps/4487a55ac6fb4a2699a1b90727c978e7
-{{< /command >}}
+```
Replace the `resource-arn` with the ARN of the application you created earlier.
The following output would be retrieved:
@@ -111,8 +110,8 @@ Instead it provides alternative ways to retrieve the actual OTP code as illustra
Begin by making an OTP request:
-{{< command >}}
-$ awslocal pinpoint send-otp-message \
+```bash
+awslocal pinpoint send-otp-message \
--application-id fff5a801e01643c18a13a763e22a8fbf \
--send-otp-message-request-parameters '{
"BrandName": "LocalStack Community",
@@ -124,19 +123,27 @@ $ awslocal pinpoint send-otp-message \
"AllowedAttempts": 3,
"ValidityPeriod": 2
}'
-
+```
+
+The output will be similar to the following:
+
+```json
{
"MessageResponse": {
"ApplicationId": "fff5a801e01643c18a13a763e22a8fbf"
}
}
-
-{{< /command >}}
+```
+You can use the debug endpoint `/_aws/pinpoint/<application-id>/<reference-id>` to retrieve the OTP message details:
-{{< command >}}
-$ curl http://localhost:4566/_aws/pinpoint/fff5a801e01643c18a13a763e22a8fbf/liftoffcampaign | jq .
+```bash
+curl http://localhost:4566/_aws/pinpoint/fff5a801e01643c18a13a763e22a8fbf/liftoffcampaign | jq .
+```
+
+The output will be similar to the following:
+
+```json
{
"AllowedAttempts": 3,
"BrandName": "LocalStack Community",
@@ -150,7 +157,7 @@ $ curl http://localhost:4566/_aws/pinpoint/fff5a801e01643c18a13a763e22a8fbf/lift
"CreatedTimestamp": "2024-10-17T05:38:24.070Z",
"Code": "655745"
}
-{{< /command >}}
+```
The OTP code is also printed in an `INFO` level message in the LocalStack log output:
@@ -160,22 +167,25 @@ The OTP code is also printed in an `INFO` level message in the LocalStack log ou
Finally, the OTP code can be verified using:
-{{< command >}}
-$ awslocal pinpoint verify-otp-message \
+```bash
+awslocal pinpoint verify-otp-message \
--application-id fff5a801e01643c18a13a763e22a8fbf \
--verify-otp-message-request-parameters '{
"ReferenceId": "liftoffcampaign",
"DestinationIdentity": "+1224364860",
"Otp": "655745"
}'
-
+```
+
+The output will be similar to the following:
+
+```json
{
"VerificationResponse": {
"Valid": true
}
}
-
-{{< /command >}}
+```
When validating OTP codes, LocalStack checks for the number of allowed attempts and the validity period.
Unlike AWS, there is no lower limit for the validity period.
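+
+For instance, submitting a wrong code (an illustrative sketch reusing the request parameters from above) should yield `"Valid": false` in the `VerificationResponse`:
+
+```bash
+awslocal pinpoint verify-otp-message \
+    --application-id fff5a801e01643c18a13a763e22a8fbf \
+    --verify-otp-message-request-parameters '{
+        "ReferenceId": "liftoffcampaign",
+        "DestinationIdentity": "+1224364860",
+        "Otp": "000000"
+    }'
+```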
diff --git a/src/content/docs/aws/services/pipes.md b/src/content/docs/aws/services/pipes.md
index 100bb41a..73f0f945 100644
--- a/src/content/docs/aws/services/pipes.md
+++ b/src/content/docs/aws/services/pipes.md
@@ -1,6 +1,5 @@
---
title: "EventBridge Pipes"
-linkTitle: "EventBridge Pipes"
description: Get started with EventBridge Pipes on LocalStack
tags: ["Free"]
persistence: supported with limitations
@@ -16,12 +15,12 @@ In contrast, EventBridge Event Bus offers a one-to-many integration where an eve
LocalStack allows you to use the Pipes APIs in your local environment to create Pipes with SQS queues and Kinesis streams as source and target.
You can also filter events using EventBridge event patterns and enrich events using Lambda.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_pipes" >}}), which provides information on the extent of Pipe's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of Pipe's integration with LocalStack.
-{{< callout >}}
+:::note
The implementation of EventBridge Pipes is currently in **preview** stage and under active development.
If you would like support for more APIs or want to report bugs, please create an issue on [GitHub](https://github.com/localstack/localstack/issues/new/choose).
-{{< /callout >}}
+:::
## Getting started
@@ -35,29 +34,29 @@ We will demonstrate how to create a Pipe with SQS queues as source and target, a
Create two SQS queues that will be used as source and target for the Pipe.
Run the following command to create a queue using the [`CreateQueue`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_CreateQueue.html) API:
-{{< command >}}
-$ awslocal sqs create-queue --queue-name source-queue
-$ awslocal sqs create-queue --queue-name target-queue
-{{< /command >}}
+```bash
+awslocal sqs create-queue --queue-name source-queue
+awslocal sqs create-queue --queue-name target-queue
+```
You can fetch their queue ARNs using the [`GetQueueAttributes`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_GetQueueAttributes.html) API:
-{{< command >}}
-$ SOURCE_QUEUE_ARN=$(awslocal sqs get-queue-attributes --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/source-queue --attribute-names QueueArn --output text)
-$ TARGET_QUEUE_ARN=$(awslocal sqs get-queue-attributes --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/target-queue --attribute-names QueueArn --output text)
-{{< /command >}}
+```bash
+SOURCE_QUEUE_ARN=$(awslocal sqs get-queue-attributes --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/source-queue --attribute-names QueueArn --output text)
+TARGET_QUEUE_ARN=$(awslocal sqs get-queue-attributes --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/target-queue --attribute-names QueueArn --output text)
+```
### Create a Pipe
You can now create a Pipe, using the [`CreatePipe`](https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_CreatePipe.html) API.
Run the following command, by specifying the source and target queue ARNs we created earlier:
-{{< command >}}
-$ awslocal pipes create-pipe --name sample-pipe \
+```bash
+awslocal pipes create-pipe --name sample-pipe \
--source $SOURCE_QUEUE_ARN \
--target $TARGET_QUEUE_ARN \
--role-arn arn:aws:iam::000000000000:role/pipes-role
-{{< /command >}}
+```
The following output would be retrieved:
@@ -76,9 +75,9 @@ The following output would be retrieved:
You can use the [`DescribePipe`](https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_DescribePipe.html) API to get information about the Pipe:
-{{< command >}}
-$ awslocal pipes describe-pipe --name sample-pipe
-{{< /command >}}
+```bash
+awslocal pipes describe-pipe --name sample-pipe
+```
The following output would be retrieved:
@@ -110,29 +109,27 @@ The following output would be retrieved:
You can now send events to the source queue, which will be routed to the target queue.
Run the following command to send an event to the source queue:
-{{< command >}}
-$ awslocal sqs send-message \
+```bash
+awslocal sqs send-message \
--queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/source-queue \
--message-body "message-1"
-{{< /command >}}
+```
### Receive events from the target queue
You can fetch the message from the target queue using the [`ReceiveMessage`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_ReceiveMessage.html) API:
-{{< command >}}
-$ awslocal sqs receive-message \
+```bash
+awslocal sqs receive-message \
--queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/target-queue
-{{< /command >}}
+```
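+
+To try out filtering, you can attach an event pattern to the pipe's source parameters using the [`UpdatePipe`](https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_UpdatePipe.html) API.
+The pattern below is an illustrative sketch that only forwards messages whose body is `message-1`:
+
+```bash
+awslocal pipes update-pipe --name sample-pipe \
+    --source-parameters '{"FilterCriteria":{"Filters":[{"Pattern":"{\"body\":[\"message-1\"]}"}]}}' \
+    --role-arn arn:aws:iam::000000000000:role/pipes-role
+```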
## Resource Browser
The LocalStack Web Application provides a Resource Browser for managing EventBridge Pipes.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resource Browser** section, and then clicking on **EventBridge Pipes** under the **App Integration** section.
-
-
-
+
The Resource Browser for EventBridge Pipes in LocalStack allows you to perform the following actions:
diff --git a/src/content/docs/aws/services/qldb.md b/src/content/docs/aws/services/qldb.md
index 9401e79a..d52dde13 100644
--- a/src/content/docs/aws/services/qldb.md
+++ b/src/content/docs/aws/services/qldb.md
@@ -5,10 +5,10 @@ tags: ["Ultimate"]
description: Get started with Quantum Ledger Database (QLDB) on LocalStack
---
-{{< callout "warning" >}}
+:::danger
Amazon QLDB will be [retired on 31 July 2025](https://docs.aws.amazon.com/qldb/latest/developerguide/what-is.html).
It will be removed from LocalStack soon after this date.
-{{< /callout >}}
+:::
## Introduction
@@ -22,7 +22,7 @@ and scalable
way to maintain a complete and verifiable history of data changes over time.
LocalStack allows you to use the QLDB APIs in your local environment to create and manage ledgers.
-The supported APIs are available on the [API coverage page]({{< ref "/references/coverage/coverage_qldb/index.md" >}} "QLDB service coverage page"), which provides information on the extent of QLDB's integration with LocalStack.
+The supported APIs are available on the [API coverage page](), which provides information on the extent of QLDB's integration with LocalStack.
## Getting started
@@ -54,9 +54,11 @@ the [Releases](https://github.com/awslabs/amazon-qldb-shell/releases) section of
QLDB provides ledger databases, which are centralized, immutable, and cryptographically verifiable
journals of transactions.
-{{< command >}}
-$ awslocal qldb create-ledger --name vehicle-registration --permissions-mode ALLOW_ALL
-{{< / command >}}
+```bash
+awslocal qldb create-ledger --name vehicle-registration --permissions-mode ALLOW_ALL
+```
+
+The output will be similar to the following:
```bash
{
@@ -69,7 +71,7 @@ $ awslocal qldb create-ledger --name vehicle-registration --permissions-mode ALL
}
```
-{{< callout >}}
+:::note
- Permissions mode – the following options are available in AWS:
@@ -89,13 +91,13 @@ To allow PartiQL
commands, you must create IAM permissions policies for specific table resources and PartiQL actions, in addition to the `SendCommand` API permission for the ledger.
-{{< /callout >}}
+:::
The following command can be used directly to write PartiQL statements against a QLDB ledger:
-{{< command >}}
-$ qldb --qldb-session-endpoint http://localhost:4566 --ledger vehicle-registration
-{{< / command >}}
+```bash
+qldb --qldb-session-endpoint http://localhost:4566 --ledger vehicle-registration
+```
From here, you can continue to create tables, then populate and interrogate them.
@@ -104,9 +106,11 @@ The user can continue from here to create tables, populate and interrogate them.
PartiQL is a query language designed for processing structured data, allowing you to perform
various data manipulation tasks using familiar SQL-like syntax.
-{{< command >}}
+```bash
qldb> CREATE TABLE VehicleRegistration
-{{< / command >}}
+```
+
+The output will be:
```bash
{
@@ -131,7 +135,7 @@ qldb> CREATE TABLE VehicleRegistration
The `VehicleRegistration` table was created.
Now it's time to add some items:
-{{< command >}}
+```bash
qldb> INSERT INTO VehicleRegistration VALUE
{
'VIN' : 'KM8SRDHF6EU074761',
@@ -149,7 +153,9 @@ qldb> INSERT INTO VehicleRegistration VALUE
'ValidFromDate' : `2017-09-14T`,
'ValidToDate' : `2020-06-25T`
}
-{{< / command >}}
+```
+
+The output will be:
```bash
{
@@ -162,9 +168,11 @@ documentId: "3TYR9BamzyqHWBjYOfHegE"
The table can be interrogated based on the inserted registration number:
-{{< command >}}
+```bash
qldb> SELECT * FROM VehicleRegistration WHERE RegNum=1722
-{{< / command >}}
+```
+
+The output will be:
```bash
{
@@ -193,9 +201,10 @@ queries.
Suppose the vehicle is sold and changes owners; this information needs to be updated with a new person ID.
-{{< command >}}
+```bash
qldb> UPDATE VehicleRegistration AS r SET r.Owners.PrimaryOwner.PersonId = '112233445566NO' WHERE r.VIN = 'KM8SRDHF6EU074761'
-{{< / command >}}
+```
+
The command will return the updated document ID.
```bash
@@ -206,9 +215,12 @@ The command will return the updated document ID.
```
The next step is to check on the updates made to the `PersonId` field of the `PrimaryOwner`:
-{{< command >}}
+
+```bash
qldb> SELECT r.Owners FROM VehicleRegistration AS r WHERE r.VIN = 'KM8SRDHF6EU074761'
-{{< / command >}}
+```
+
+The output will be:
```bash
{
@@ -236,9 +248,11 @@ You can see all revisions of a document that you inserted, updated, and deleted
built-in History function.
First the unique `id` of the document must be found.
-{{< command >}}
+```bash
qldb> SELECT r_id FROM VehicleRegistration AS r BY r_id WHERE r.VIN = 'KM8SRDHF6EU074761'
-{{< / command >}}
+```
+
+The output will be:
```bash
{
@@ -250,9 +264,11 @@ r_id: "3TYR9BamzyqHWBjYOfHegE"
Then, the `id` is used to query the history function.
-{{< command >}}
+```bash
qldb> SELECT h.data.VIN, h.data.City, h.data.Owners FROM history(VehicleRegistration) AS h WHERE h.metadata.id = '3TYR9BamzyqHWBjYOfHegE'
-{{< / command >}}
+```
+
+The output will be:
```bash
{
@@ -298,9 +314,11 @@ Unused ledgers can be deleted.
You'll notice that directly running the following command will lead
to an error message.
-{{< command >}}
-$ awslocal qldb delete-ledger --name vehicle-registration
-{{< / command >}}
+```bash
+awslocal qldb delete-ledger --name vehicle-registration
+```
+
+The output will be:
```bash
An error occurred (ResourcePreconditionNotMetException) when calling the DeleteLedger operation: Preventing deletion
@@ -309,9 +327,11 @@ of ledger vehicle-registration with DeletionProtection enabled
This can be adjusted using the `update-ledger` command in the AWS CLI to remove the deletion protection of the ledger:
-{{< command >}}
-$ awslocal qldb update-ledger --name vehicle-registration --no-deletion-protection
-{{< / command >}}
+```bash
+awslocal qldb update-ledger --name vehicle-registration --no-deletion-protection
+```
+
+The output will be:
```bash
{
@@ -330,9 +350,7 @@ Now the `delete-ledger` command can be repeated without errors.
The LocalStack Web Application provides a Resource Browser for managing QLDB ledgers.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **QLDB** under the **Database** section.
-
-
-
+
The Resource Browser allows you to perform the following actions:
diff --git a/src/content/docs/aws/services/ram.md b/src/content/docs/aws/services/ram.md
index 4d1db6dd..4978ac67 100644
--- a/src/content/docs/aws/services/ram.md
+++ b/src/content/docs/aws/services/ram.md
@@ -1,6 +1,5 @@
---
title: "Resource Access Manager (RAM)"
-linkTitle: "Resource Access Manager (RAM)"
description: Get started with RAM on LocalStack
tags: ["Ultimate"]
---
@@ -9,7 +8,7 @@ tags: ["Ultimate"]
Resource Access Manager (RAM) enables resources to be shared across AWS accounts, within or across organizations.
On AWS, RAM is an abstraction on top of AWS Identity and Access Management (IAM) which manages resource-based policies for supported resource types.
-The API operations supported by LocalStack can be found on the [API coverage page]({{< ref "coverage_ram" >}}).
+The API operations supported by LocalStack can be found on the [API coverage page](), which provides information on the extent of RAM's integration with LocalStack.
## Getting started
@@ -18,21 +17,21 @@ This section will illustrate how to create permissions and resource shares using
### Create a permission
-{{< command >}}
-$ awslocal ram create-permission \
+```bash
+awslocal ram create-permission \
--name example \
--resource-type appsync:apis \
--policy-template '{"Effect": "Allow", "Action": "appsync:SourceGraphQL"}'
-{{< /command >}}
+```
### Create a resource share
-{{< command >}}
-$ awslocal ram create-resource-share \
+```bash
+awslocal ram create-resource-share \
--name example-resource-share \
--principals arn:aws:organizations::000000000000:organization/o-truopwybwi \
--resource-arn arn:aws:appsync:eu-central-1:000000000000:apis/wcgmjril5wuyvhmpildatuaat3
-{{< /command >}}
+```
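+
+To verify the share, you can list the resource shares owned by your account using the [`GetResourceShares`](https://docs.aws.amazon.com/ram/latest/APIReference/API_GetResourceShares.html) API:
+
+```bash
+awslocal ram get-resource-shares --resource-owner SELF
+```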
## Current Limitations
diff --git a/src/content/docs/aws/services/rds.md b/src/content/docs/aws/services/rds.md
index 75ee825d..16d2fbfc 100644
--- a/src/content/docs/aws/services/rds.md
+++ b/src/content/docs/aws/services/rds.md
@@ -1,6 +1,5 @@
---
title: "Relational Database Service (RDS)"
-linkTitle: "Relational Database Service (RDS)"
description: Get started with Relational Database Service (RDS) on LocalStack
tags: ["Base"]
persistence: supported with limitations
@@ -13,15 +12,15 @@ RDS allows you to deploy and manage various relational database engines like MyS
RDS handles routine database tasks such as provisioning, patching, backup, recovery, and scaling.
LocalStack allows you to use the RDS APIs in your local environment to create and manage RDS clusters and instances for testing & integration purposes.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_rds" >}}), which provides information on the extent of RDS's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of RDS's integration with LocalStack.
-{{< callout >}}
+:::note
We’ve introduced a new native RDS provider in LocalStack and made it the default.
This replaces Moto-based CRUD operations with a more reliable setup.
RDS state created in version 4.3 or earlier using Cloud Pods or standard persistence will not be compatible with the new provider introduced in version 4.4.
Recreating the RDS state is recommended for compatibility.
-{{< /callout >}}
+:::
## Getting started
@@ -42,14 +41,14 @@ To create an RDS cluster, you can use the [`CreateDBCluster`](https://docs.aws.a
The following command creates a new cluster with the name `db1` and the engine `aurora-postgresql`.
Instances for the cluster must be added manually.
-{{< command >}}
-$ awslocal rds create-db-cluster \
+```bash
+awslocal rds create-db-cluster \
--db-cluster-identifier db1 \
--engine aurora-postgresql \
--database-name test \
--master-username myuser \
--master-user-password mypassword
-{{< / command >}}
+```
You should see the following output:
@@ -67,13 +66,13 @@ You should see the following output:
To add an instance you can run the following command:
-{{< command >}}
-$ awslocal rds create-db-instance \
+```bash
+awslocal rds create-db-instance \
--db-instance-identifier db1-instance \
--db-cluster-identifier db1 \
--engine aurora-postgresql \
--db-instance-class db.t3.large
-{{< / command >}}
+```
### Create a SecretsManager secret
@@ -81,8 +80,8 @@ To create a `SecretsManager` secret, you can use the [`CreateSecret`](https://do
Before creating the secret, you need to create a JSON file containing the credentials for the database.
The following command creates a file called `mycreds.json` with the credentials for the database.
-{{< command >}}
-$ cat << 'EOF' > mycreds.json
+```bash
+cat << 'EOF' > mycreds.json
{
"engine": "aurora-postgresql",
"username": "myuser",
@@ -92,15 +91,15 @@ $ cat << 'EOF' > mycreds.json
"port": "4510"
}
EOF
-{{< / command >}}
+```
Run the following command to create the secret:
-{{< command >}}
-$ awslocal secretsmanager create-secret \
+```bash
+awslocal secretsmanager create-secret \
--name dbpass \
--secret-string file://mycreds.json
-{{< / command >}}
+```
You should see the following output:
@@ -121,13 +120,13 @@ Make sure to replace the `secret-arn` with the ARN from the secret you just crea
The following command executes a query against the database.
The query returns the value `123`.
-{{< command >}}
-$ awslocal rds-data execute-statement \
+```bash
+awslocal rds-data execute-statement \
--database test \
--resource-arn arn:aws:rds:us-east-1:000000000000:cluster:db1 \
--secret-arn arn:aws:secretsmanager:us-east-1:000000000000:secret:dbpass-cfnAX \
--include-result-metadata --sql 'SELECT 123'
-{{< / command >}}
+```
You should see the following output:
@@ -165,9 +164,9 @@ You should see the following output:
Alternative clients, such as `psql`, can also be employed to interact with the database.
You can retrieve the hostname and port of your created instance either from the preceding output or by using the [`DescribeDbInstances`](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBInstances.html) API.
-{{< command >}}
-$ psql -d test -U test -p 4513 -h localhost -W
-{{< / command >}}
+```bash
+psql -d test -U test -p 4513 -h localhost -W
+```
## Supported DB engines
@@ -185,10 +184,10 @@ It's important to note that the selection of minor versions is not available.
The latest major version will be installed within the Docker environment.
If you wish to prevent the installation of customized versions, adjusting the `RDS_PG_CUSTOM_VERSIONS` environment variable to `0` will enforce the use of the default PostgreSQL version 17.
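+
+For example, when starting LocalStack via the CLI, the variable can be set in the environment (a sketch; adapt to your startup method):
+
+```bash
+# Enforce the default PostgreSQL version instead of installing a custom one
+RDS_PG_CUSTOM_VERSIONS=0 localstack start
+```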
-{{< callout >}}
+:::note
While the [`DescribeDbCluster`](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBClusters.html) and [`DescribeDbInstances`](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBInstances.html) APIs will still reflect the initially defined `engine-version`, the actual installed PostgreSQL engine might differ.
This can have implications, particularly when employing a Terraform configuration, where unexpected changes should be avoided.
-{{< /callout >}}
+:::
Instances and clusters with the PostgreSQL engine have the capability to both create and restore snapshots.
@@ -205,10 +204,10 @@ A MySQL community server will be launched in a new Docker container upon request
The `engine-version` will serve as the tag for the Docker image, allowing you to freely select the desired MySQL version from those available on the [official MySQL Docker Hub](https://hub.docker.com/_/mysql).
If you have a specific image in mind, you can also use the environment variable `MYSQL_IMAGE=`.
-{{< callout >}}
+:::note
The `arm64` MySQL images are limited to newer versions.
For more information about availability, check the [MySQL Docker Hub repository](https://hub.docker.com/_/mysql).
-{{< /callout >}}
+:::
It's essential to understand that the `MasterUserPassword` you define for the database cluster/instance will be used as the `MYSQL_ROOT_PASSWORD` environment variable for the `root` user within the MySQL container.
The user specified in `MasterUserName` will use the same password and will have complete access to the database.
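+
+As a minimal sketch (the identifier, credentials, version, and instance class below are assumptions), creating a MySQL instance could look like this:
+
+```bash
+awslocal rds create-db-instance \
+    --db-instance-identifier mysql-db \
+    --engine mysql \
+    --engine-version 8.0 \
+    --master-username myuser \
+    --master-user-password mypassword \
+    --db-instance-class db.t3.small
+# The master password doubles as MYSQL_ROOT_PASSWORD inside the container,
+# so both root and myuser can log in with it once the instance is up.
+```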
@@ -255,11 +254,11 @@ In this example, you will be able to verify the IAM authentication process for R
The following command creates a new database instance with the name `mydb` and the engine `postgres`.
The database will be created with a single instance, which will be used as the master instance.
-{{< command >}}
-$ MASTER_USER=hello
-$ MASTER_PW='MyPassw0rd!'
-$ DB_NAME=test
-$ awslocal rds create-db-instance \
+```bash
+MASTER_USER=hello
+MASTER_PW='MyPassw0rd!'
+DB_NAME=test
+awslocal rds create-db-instance \
--master-username $MASTER_USER \
--master-user-password $MASTER_PW \
--db-instance-identifier mydb \
@@ -267,38 +266,38 @@ $ awslocal rds create-db-instance \
--db-name $DB_NAME \
--enable-iam-database-authentication \
--db-instance-class db.t3.small
-{{< / command >}}
+```
### Connect to the database
You can retrieve the hostname and port of your created instance either from the preceding output or by using the [`DescribeDbInstances`](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBInstances.html) API.
Run the following command to retrieve the host and port of the instance:
-{{< command >}}
-$ PORT=$(awslocal rds describe-db-instances --db-instance-identifier mydb | jq -r ".DBInstances[0].Endpoint.Port")
-$ HOST=$(awslocal rds describe-db-instances --db-instance-identifier mydb | jq -r ".DBInstances[0].Endpoint.Address")
-{{< / command >}}
+```bash
+PORT=$(awslocal rds describe-db-instances --db-instance-identifier mydb | jq -r ".DBInstances[0].Endpoint.Port")
+HOST=$(awslocal rds describe-db-instances --db-instance-identifier mydb | jq -r ".DBInstances[0].Endpoint.Address")
+```
Next, you can connect to the database using the master username and password:
-{{< command >}}
-$ PGPASSWORD=$MASTER_PW psql -d $DB_NAME -U $MASTER_USER -p $PORT -h $HOST -w -c 'CREATE USER myiam WITH LOGIN'
-$ PGPASSWORD=$MASTER_PW psql -d $DB_NAME -U $MASTER_USER -p $PORT -h $HOST -w -c 'GRANT rds_iam TO myiam'
-{{< / command >}}
+```bash
+PGPASSWORD=$MASTER_PW psql -d $DB_NAME -U $MASTER_USER -p $PORT -h $HOST -w -c 'CREATE USER myiam WITH LOGIN'
+PGPASSWORD=$MASTER_PW psql -d $DB_NAME -U $MASTER_USER -p $PORT -h $HOST -w -c 'GRANT rds_iam TO myiam'
+```
### Create a token
You can create a token for the user you generated using the [`generate-db-auth-token`](https://docs.aws.amazon.com/cli/latest/reference/rds/generate-db-auth-token.html) command:
-{{< command >}}
-$ TOKEN=$(awslocal rds generate-db-auth-token --username myiam --hostname $HOST --port $PORT)
-{{< / command >}}
+```bash
+TOKEN=$(awslocal rds generate-db-auth-token --username myiam --hostname $HOST --port $PORT)
+```
You can now connect to the database utilizing the user you generated and the token obtained in the previous step as the password:
-{{< command >}}
-$ PGPASSWORD=$TOKEN psql -d $DB_NAME -U myiam -w -p $PORT -h $HOST
-{{< / command >}}
+```bash
+PGPASSWORD=$TOKEN psql -d $DB_NAME -U myiam -w -p $PORT -h $HOST
+```
## Global Database Support
@@ -369,9 +368,7 @@ In addition to the `aws_*` extensions described in the sections above, LocalStac
The LocalStack Web Application provides a Resource Browser for managing RDS instances and clusters.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **RDS** under the **Database** section.
-
-
-
+
The Resource Browser allows you to perform the following actions:
diff --git a/src/content/docs/aws/services/redshift.md b/src/content/docs/aws/services/redshift.md
index 507b9e8c..5cd19eda 100644
--- a/src/content/docs/aws/services/redshift.md
+++ b/src/content/docs/aws/services/redshift.md
@@ -1,6 +1,5 @@
---
title: "Redshift"
-linkTitle: "Redshift"
description: Get started with Redshift on LocalStack
tags: ["Free", "Ultimate"]
---
@@ -12,12 +11,12 @@ RedShift is fully managed by AWS and serves as a petabyte-scale service which al
The query results can be saved to an S3 Data Lake while additional analytics can be provided by Athena or SageMaker.
LocalStack allows you to use the RedShift APIs in your local environment to analyze structured and semi-structured data across local data warehouses and data lakes.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_redshift" >}}), which provides information on the extent of RedShift's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of RedShift's integration with LocalStack.
-{{< callout "Note" >}}
+:::note
Users on the Free plan can use RedShift APIs in LocalStack for basic mocking and testing.
For advanced features like the Redshift Data API and other emulation capabilities, consider upgrading to the Ultimate plan.
-{{< /callout >}}
+:::
## Getting started
@@ -51,108 +50,106 @@ You will also create a Glue database, connection, and crawler to populate the Gl
You can create a RedShift cluster using the [`CreateCluster`](https://docs.aws.amazon.com/redshift/latest/APIReference/API_CreateCluster.html) API.
The following command will create a RedShift cluster with the variables defined above:
-{{< command >}}
-$ awslocal redshift create-cluster \
+```bash
+awslocal redshift create-cluster \
--cluster-identifier $REDSHIFT_CLUSTER_IDENTIFIER \
--db-name $REDSHIFT_DATABASE_NAME \
--master-username $REDSHIFT_USERNAME \
--master-user-password $REDSHIFT_PASSWORD \
--node-type n1
-{{< / command >}}
+```
You can fetch the status of the cluster using the [`DescribeClusters`](https://docs.aws.amazon.com/redshift/latest/APIReference/API_DescribeClusters.html) API.
Run the following command to extract the URL of the cluster:
-{{< command >}}
-$ REDSHIFT_URL=$(awslocal redshift describe-clusters \
+```bash
+REDSHIFT_URL=$(awslocal redshift describe-clusters \
--cluster-identifier $REDSHIFT_CLUSTER_IDENTIFIER | jq -r '(.Clusters[0].Endpoint.Address) + ":" + (.Clusters[0].Endpoint.Port|tostring)')
-{{< / command >}}
+```
### Create a Glue database, connection, and crawler
You can create a Glue database using the [`CreateDatabase`](https://docs.aws.amazon.com/glue/latest/webapi/API_CreateDatabase.html) API.
The following command will create a Glue database:
-{{< command >}}
-$ awslocal glue create-database \
+```bash
+awslocal glue create-database \
--database-input "{\"Name\": \"$GLUE_DATABASE_NAME\"}"
-{{< / command >}}
+```
You can create a connection to the RedShift cluster using the [`CreateConnection`](https://docs.aws.amazon.com/glue/latest/webapi/API_CreateConnection.html) API.
The following command will create a Glue connection with the RedShift cluster:
-{{< command >}}
-$ awslocal glue create-connection \
+```bash
+awslocal glue create-connection \
--connection-input "{\"Name\":\"$GLUE_CONNECTION_NAME\", \"ConnectionType\": \"JDBC\", \"ConnectionProperties\": {\"USERNAME\": \"$REDSHIFT_USERNAME\", \"PASSWORD\": \"$REDSHIFT_PASSWORD\", \"JDBC_CONNECTION_URL\": \"jdbc:redshift://$REDSHIFT_URL/$REDSHIFT_DATABASE_NAME\"}}"
-{{< / command >}}
+```
Finally, you can create a Glue crawler using the [`CreateCrawler`](https://docs.aws.amazon.com/glue/latest/webapi/API_CreateCrawler.html) API.
The following command will create a Glue crawler:
-{{< command >}}
-$ awslocal glue create-crawler \
+```bash
+awslocal glue create-crawler \
--name $GLUE_CRAWLER_NAME \
--database-name $GLUE_DATABASE_NAME \
--targets "{\"JdbcTargets\": [{\"ConnectionName\": \"$GLUE_CONNECTION_NAME\", \"Path\": \"$REDSHIFT_DATABASE_NAME/%/$REDSHIFT_TABLE_NAME\"}]}" \
--role r1
-{{< / command >}}
+```
### Create table in RedShift
You can create a table in RedShift using the [`CreateTable`](https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_TABLE_NEW.html) API.
The following command will create a table in RedShift:
-{{< command >}}
-$ REDSHIFT_STATEMENT_ID=$(awslocal redshift-data execute-statement \
+```bash
+REDSHIFT_STATEMENT_ID=$(awslocal redshift-data execute-statement \
--cluster-identifier $REDSHIFT_CLUSTER_IDENTIFIER \
--database $REDSHIFT_DATABASE_NAME \
--sql \
"create table $REDSHIFT_TABLE_NAME(salesid integer not null, listid integer not null, sellerid integer not null, buyerid integer not null, eventid integer not null, dateid smallint not null, qtysold smallint not null, pricepaid decimal(8,2), commission decimal(8,2), saletime timestamp)" | jq -r .Id)
-{{< / command >}}
+```
You can check the status of the statement using the [`DescribeStatement`](https://docs.aws.amazon.com/redshift-data/latest/APIReference/API_DescribeStatement.html) API.
The following command will check the status of the statement:
-{{< command >}}
-$ wait "awslocal redshift-data describe-statement \
+```bash
+wait "awslocal redshift-data describe-statement \
--id $REDSHIFT_STATEMENT_ID" ".Status" "FINISHED"
-{{< / command >}}
+```
### Run the crawler
You can run the crawler using the [`StartCrawler`](https://docs.aws.amazon.com/glue/latest/webapi/API_StartCrawler.html) API.
The following command will run the crawler:
-{{< command >}}
-$ awslocal glue start-crawler \
+```bash
+awslocal glue start-crawler \
--name $GLUE_CRAWLER_NAME
-{{< / command >}}
+```
You can wait for the crawler to finish using the [`GetCrawler`](https://docs.aws.amazon.com/glue/latest/webapi/API_GetCrawler.html) API.
The following command will wait for the crawler to finish:
-{{< command >}}
-$ wait "awslocal glue get-crawler \
+```bash
+wait "awslocal glue get-crawler \
--name $GLUE_CRAWLER_NAME" ".Crawler.State" "READY"
-{{< / command >}}
+```
You can finally retrieve the schema of the table using the [`GetTable`](https://docs.aws.amazon.com/glue/latest/webapi/API_GetTable.html) API.
The following command will retrieve the schema of the table:
-{{< command >}}
-$ awslocal glue get-table \
+```bash
+awslocal glue get-table \
--database-name $GLUE_DATABASE_NAME \
--name "${REDSHIFT_DATABASE_NAME}_${REDSHIFT_SCHEMA_NAME}_${REDSHIFT_TABLE_NAME}"
-{{< / command >}}
+```
## Resource Browser
The LocalStack Web Application provides a Resource Browser for managing RedShift clusters.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **RedShift** under the **Analytics** section.
-
-
-
+
The Resource Browser allows you to perform the following actions:
diff --git a/src/content/docs/aws/services/resourcegroups.md b/src/content/docs/aws/services/resourcegroups.md
index b3729229..58649d89 100644
--- a/src/content/docs/aws/services/resourcegroups.md
+++ b/src/content/docs/aws/services/resourcegroups.md
@@ -1,6 +1,5 @@
---
title: "Resource Groups"
-linkTitle: "Resource Groups"
tags: ["Free"]
description: >
Get started with Resource Groups on LocalStack
@@ -14,7 +13,7 @@ Resource Groups in AWS provide two types of queries that developers can use to b
With Tag-based queries, developers can organize resources based on common attributes or characteristics, while CloudFormation stack-based queries allow developers to group resources that are deployed together as part of a CloudFormation stack.
LocalStack allows you to use the Resource Groups APIs in your local environment to group and categorize resources based on criteria such as tags, resource types, regions, or custom attributes.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_resource-groups" >}}), which provides information on the extent of Resource Group's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of Resource Group's integration with LocalStack.
## Getting Started
@@ -34,11 +33,11 @@ A tag-based group is created based on a query of type `TAG_FILTERS_1_0`.
Use the [`CreateGroup`](https://docs.aws.amazon.com/resource-groups/latest/APIReference/API_CreateGroup.html) API to create a Resource Group.
Run the following command to create a Resource Group named `my-resource-group`:
-{{< command >}}
-$ awslocal resource-groups create-group \
+```bash
+awslocal resource-groups create-group \
--name my-resource-group \
--resource-query '{"Type":"TAG_FILTERS_1_0","Query":"{\"ResourceTypeFilters\":[\"AWS::EC2::Instance\"],\"TagFilters\":[{\"Key\":\"Stage\",\"Values\":[\"Test\"]}]}"}'
-{{< /command >}}
+```
You can also specify `AWS::AllSupported` as the `ResourceTypeFilters` value to include all supported resource types in the group.
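+
+For example, a variant of the command above (the group name is assumed) that matches all supported resource types carrying the `Stage=Test` tag:
+
+```bash
+awslocal resource-groups create-group \
+    --name my-tagged-resources \
+    --resource-query '{"Type":"TAG_FILTERS_1_0","Query":"{\"ResourceTypeFilters\":[\"AWS::AllSupported\"],\"TagFilters\":[{\"Key\":\"Stage\",\"Values\":[\"Test\"]}]}"}'
+```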
@@ -47,27 +46,27 @@ You can also specify `AWS::AllSupported` as the `ResourceTypeFilters` value to i
To update a Resource Group, use the [`UpdateGroup`](https://docs.aws.amazon.com/resource-groups/latest/APIReference/API_UpdateGroup.html) API.
Execute the following command to update the Resource Group `my-resource-group`:
-{{< command >}}
+```bash
awslocal resource-groups update-group \
--group-name my-resource-group \
--description "EC2 S3 buckets and RDS DBs that we are using for the test stage"
-{{< /command >}}
+```
Furthermore, you can also update the query and tags associated with a Resource Group using the [`UpdateGroup`](https://docs.aws.amazon.com/resource-groups/latest/APIReference/API_UpdateGroup.html) API.
Run the following command to update the query and tags of the Resource Group `my-resource-group`:
-{{< command >}}
+```bash
awslocal resource-groups update-group-query \
--group-name my-resource-group \
--resource-query '{"Type":"TAG_FILTERS_1_0","Query":"{\"ResourceTypeFilters\":[\"AWS::EC2::Instance\",\"AWS::S3::Bucket\",\"AWS::RDS::DBInstance\"],\"TagFilters\":[{\"Key\":\"Stage\",\"Values\":[\"Test\"]}]}"}'
-{{< /command >}}
+```
### Delete a Resource Group
To delete a Resource Group, use the [`DeleteGroup`](https://docs.aws.amazon.com/resource-groups/latest/APIReference/API_DeleteGroup.html) API.
Run the following command to delete the Resource Group `my-resource-group`:
-{{< command >}}
-$ awslocal resource-groups delete-group \
+```bash
+awslocal resource-groups delete-group \
--group-name my-resource-group
-{{< /command >}}
+```
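+
+To confirm the deletion, you can list the remaining Resource Groups using the [`ListGroups`](https://docs.aws.amazon.com/resource-groups/latest/APIReference/API_ListGroups.html) API:
+
+```bash
+awslocal resource-groups list-groups
+```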
diff --git a/src/content/docs/aws/services/route53.md b/src/content/docs/aws/services/route53.md
index 8496ab94..454ebc37 100644
--- a/src/content/docs/aws/services/route53.md
+++ b/src/content/docs/aws/services/route53.md
@@ -1,6 +1,5 @@
---
title: "Route 53"
-linkTitle: "Route 53"
description: Get started with Route 53 on LocalStack
persistence: supported
tags: ["Free"]
@@ -14,14 +13,14 @@ In addition to basic DNS functionality, Route 53 offers advanced features like h
Route 53 integrates seamlessly with other AWS services, such as routing traffic to CloudFront distributions, S3 buckets configured for static website hosting, EC2 instances, and more.
LocalStack allows you to use the Route53 APIs in your local environment to create hosted zones and to manage DNS entries.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_route53" >}}), which provides information on the extent of Route53's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of Route53's integration with LocalStack.
LocalStack also integrates with its DNS server to respond to DNS queries for these domains.
-{{< callout "note">}}
+:::note
The LocalStack CLI no longer publishes port `53` by default.
Use the CLI flag `--host-dns` to expose the port on the host.
This is required if you want to resolve Route53 domain names from your host machine using the LocalStack DNS server.
-{{< /callout >}}
+:::
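+
+For example, when starting LocalStack via the CLI (a sketch; adapt to your setup):
+
+```bash
+# Expose the LocalStack DNS server on port 53 of the host
+localstack start --host-dns
+```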
## Getting started
@@ -35,12 +34,12 @@ We will demonstrate how to create a hosted zone and query the DNS record with th
You can create a hosted zone for `example.com` using the [`CreateHostedZone`](https://docs.aws.amazon.com/Route53/latest/APIReference/API_CreateHostedZone.html) API.
Run the following command:
-{{< command >}}
-$ zone_id=$(awslocal route53 create-hosted-zone \
+```bash
+zone_id=$(awslocal route53 create-hosted-zone \
--name example.com \
--caller-reference r1 | jq -r '.HostedZone.Id')
-$ echo $zone_id
-{{< / command >}}
+echo $zone_id
+```
The following output would be retrieved:
@@ -53,11 +52,11 @@ The following output would be retrieved:
You can now change the resource record sets for the hosted zone `example.com` using the [`ChangeResourceRecordSets`](https://docs.aws.amazon.com/Route53/latest/APIReference/API_ChangeResourceRecordSets.html) API.
Run the following command:
-{{< command >}}
-$ awslocal route53 change-resource-record-sets \
+```bash
+awslocal route53 change-resource-record-sets \
--hosted-zone-id $zone_id \
--change-batch 'Changes=[{Action=CREATE,ResourceRecordSet={Name=test.example.com,Type=A,ResourceRecords=[{Value=1.2.3.4}]}}]'
-{{< / command >}}
+```
The following output would be retrieved:
@@ -73,19 +72,19 @@ The following output would be retrieved:
## DNS resolution
-LocalStack Pro supports the ability to respond to DNS queries for your Route53 domain names, with our [integrated DNS server]({{< ref "user-guide/tools/dns-server" >}}).
+LocalStack Pro can respond to DNS queries for your Route53 domain names via our [integrated DNS server](/aws/tooling/dns-server).
-{{< callout >}}
-To follow the example below you must [configure your system DNS to use the LocalStack DNS server]({{< ref "user-guide/tools/dns-server#system-dns-configuration" >}}).
-{{< /callout >}}
+:::note
+To follow the example below you must [configure your system DNS to use the LocalStack DNS server](/aws/tooling/dns-server#system-dns-configuration).
+:::
### Query a DNS record
You can query the DNS record using `dig` via the built-in DNS server by running the following command:
-{{< command >}}
-$ dig @localhost test.example.com
-{{< / command >}}
+```bash
+dig @localhost test.example.com
+```
The following output would be retrieved:
@@ -101,7 +100,7 @@ test.example.com. 300 IN A 1.2.3.4
The DNS name `localhost.localstack.cloud`, along with its subdomains like `mybucket.s3.localhost.localstack.cloud`, serves an internal routing purpose within LocalStack.
It facilitates communication between a LocalStack compute environment (such as a Lambda function) and the LocalStack APIs, as well as between your containerised applications and the LocalStack APIs.
-For example configurations, see the [Network Troubleshooting guide]({{< ref "references/network-troubleshooting/endpoint-url/#from-your-container" >}}).
+For example configurations, see the [Network Troubleshooting guide]().
For most use-cases, the default configuration of the internal LocalStack DNS name requires no modification.
It functions seamlessly in typical scenarios.
@@ -115,12 +114,12 @@ This can be accomplished using Route53.
Create a hosted zone for the domain `localhost.localstack.cloud` using the [`CreateHostedZone` API](https://docs.aws.amazon.com/Route53/latest/APIReference/API_CreateHostedZone.html) API.
Run the following command:
-{{< command >}}
-$ zone_id=$(awslocal route53 create-hosted-zone \
+```bash
+zone_id=$(awslocal route53 create-hosted-zone \
--name localhost.localstack.cloud \
--caller-reference r1 | jq -r .HostedZone.Id)
-$ echo $zone_id
-{{< / command >}}
+echo $zone_id
+```
The following output would be retrieved:
@@ -131,11 +130,11 @@ The following output would be retrieved:
You can now use the [`ChangeResourceRecordSets`](https://docs.aws.amazon.com/Route53/latest/APIReference/API_ChangeResourceRecordSets.html) API to create a record set for the domain `localhost.localstack.cloud` using the `zone_id` retrieved in the previous step.
Run the following command to accomplish this:
-{{< command >}}
-$ awslocal route53 change-resource-record-sets \
+```bash
+awslocal route53 change-resource-record-sets \
--hosted-zone-id $zone_id \
--change-batch '{"Changes":[{"Action":"CREATE","ResourceRecordSet":{"Name":"localhost.localstack.cloud","Type":"A","ResourceRecords":[{"Value":"5.6.7.8"}]}},{"Action":"CREATE","ResourceRecordSet":{"Name":"*.localhost.localstack.cloud","Type":"A","ResourceRecords":[{"Value":"5.6.7.8"}]}}]}'
-{{< / command >}}
+```
The following output would be retrieved:
@@ -151,10 +150,10 @@ The following output would be retrieved:
You can now verify that the DNS name `localhost.localstack.cloud` and its subdomains resolve to the IP address:
-{{< command >}}
-$ dig @127.0.0.1 bucket1.s3.localhost.localstack.cloud
-$ dig @127.0.0.1 localhost.localstack.cloud
-{{< / command >}}
+```bash
+dig @127.0.0.1 bucket1.s3.localhost.localstack.cloud
+dig @127.0.0.1 localhost.localstack.cloud
+```
The following output would be retrieved:
@@ -176,7 +175,7 @@ localhost.localstack.cloud. 300 IN A 5.6.7.8
The LocalStack Web Application provides a Resource Browser for Route53 to create hosted zones and manage DNS entries.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Route53** under the **Analytics** section.
-
+
The Resource Browser allows you to perform the following actions:
diff --git a/src/content/docs/aws/services/route53resolver.md b/src/content/docs/aws/services/route53resolver.md
index dfebf69c..9ba53178 100644
--- a/src/content/docs/aws/services/route53resolver.md
+++ b/src/content/docs/aws/services/route53resolver.md
@@ -1,6 +1,5 @@
---
title: "Route 53 Resolver"
-linkTitle: "Route 53 Resolver"
description: Get started with Route 53 Resolver on LocalStack
persistence: supported
tags: ["Free"]
@@ -13,7 +12,7 @@ Route 53 Resolver forwards DNS queries for domain names to the appropriate DNS s
Route 53 Resolver can be used to resolve domain names between your VPC and your network, and to resolve domain names between your VPCs.
LocalStack allows you to use the Route 53 Resolver endpoints in your local environment.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_route53resolver" >}}), which provides information on the extent of Route 53 Resolver's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of Route 53 Resolver's integration with LocalStack.
## Getting started
@@ -26,15 +25,15 @@ We will demonstrate how to create a resolver endpoint, list the endpoints, and d
Fetch the default VPC ID using the following command:
-{{< command >}}
-$ VPC_ID=$(awslocal ec2 describe-vpcs --query 'Vpcs[?IsDefault==`true`].VpcId' --output text)
-{{< / command >}}
+```bash
+VPC_ID=$(awslocal ec2 describe-vpcs --query 'Vpcs[?IsDefault==`true`].VpcId' --output text)
+```
Fetch the subnet IDs of the default VPC using the following command:
-{{< command >}}
-$ awslocal ec2 describe-subnets --filters Name=vpc-id,Values=$VPC_ID --query 'Subnets[].SubnetId'
-{{< / command >}}
+```bash
+awslocal ec2 describe-subnets --filters Name=vpc-id,Values=$VPC_ID --query 'Subnets[].SubnetId'
+```
You should see the following output:
@@ -49,34 +48,48 @@ You should see the following output:
]
```
-Choose two subnets from the list above and fetch the CIDR block of the subnets which tells you the range of IP addresses within it:
+Choose two subnets from the list above and fetch their CIDR blocks, which tell you the range of IP addresses within each subnet.
+Let's fetch the CIDR block of the subnet `subnet-957d6ba6`:
+
+```bash
+awslocal ec2 describe-subnets --subnet-ids subnet-957d6ba6 --query 'Subnets[*].CidrBlock'
+```
+
+The following output would be retrieved:
-{{< command >}}
-$ awslocal ec2 describe-subnets --subnet-ids subnet-957d6ba6 --query 'Subnets[*].CidrBlock'
-
+```bash
[
"172.31.16.0/20"
]
-
-$ awslocal ec2 describe-subnets --subnet-ids subnet-bdd58a47 --query 'Subnets[*].CidrBlock'
-
+```
+
+Similarly, fetch the CIDR block of the subnet `subnet-bdd58a47`:
+
+```bash
+awslocal ec2 describe-subnets --subnet-ids subnet-bdd58a47 --query 'Subnets[*].CidrBlock'
+```
+
+The following output would be retrieved:
+
+```bash
[
"172.31.0.0/20"
]
-
-{{< / command >}}
+```
Save the CIDR blocks of the subnets as you will need them later.
Lastly, fetch the security group ID of the default VPC:
-{{< command >}}
-$ awslocal ec2 describe-security-groups \
+```bash
+awslocal ec2 describe-security-groups \
--filters Name=vpc-id,Values=$VPC_ID \
--query 'SecurityGroups[0].GroupId'
-
+```
+
+The following output would be retrieved:
+
+```bash
sg-39936e572e797b360
-
-{{< / command >}}
+```
Save the security group ID as you will need it later.
@@ -114,10 +127,10 @@ Replace the `Ip` and `SubnetId` values with the CIDR blocks and subnet IDs you f
You can now use the [`CreateResolverEndpoint`](https://docs.aws.amazon.com/Route53/latest/APIReference/API_route53resolver_CreateResolverEndpoint.html) API to create an outbound resolver endpoint.
Run the following command:
-{{< command >}}
-$ awslocal route53resolver create-resolver-endpoint \
+```bash
+awslocal route53resolver create-resolver-endpoint \
--cli-input-json file://create-outbound-resolver-endpoint.json
-{{< / command >}}
+```
The following output would be retrieved:
@@ -147,9 +160,9 @@ The following output would be retrieved:
You can list the resolver endpoints using the [`ListResolverEndpoints`](https://docs.aws.amazon.com/Route53/latest/APIReference/API_route53resolver_ListResolverEndpoints.html) API.
Run the following command:
-{{< command >}}
-$ awslocal route53resolver list-resolver-endpoints
-{{< / command >}}
+```bash
+awslocal route53resolver list-resolver-endpoints
+```
The following output would be retrieved:
@@ -182,10 +195,10 @@ The following output would be retrieved:
You can delete the resolver endpoint using the [`DeleteResolverEndpoint`](https://docs.aws.amazon.com/Route53/latest/APIReference/API_route53resolver_DeleteResolverEndpoint.html) API.
Run the following command:
-{{< command >}}
-$ awslocal route53resolver delete-resolver-endpoint \
+```bash
+awslocal route53resolver delete-resolver-endpoint \
--resolver-endpoint-id rslvr-out-5d61abaff9de06b99
-{{< / command >}}
+```
Replace `rslvr-out-5d61abaff9de06b99` with the ID of the resolver endpoint you want to delete.
@@ -195,7 +208,7 @@ The LocalStack Web Application provides a Route53 Resolver for creating and mana
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resource Browser** section, and then clicking on **Route53** under the **Analytics** section.
Navigate to the **Resolver Endpoints** tab to view the resolver endpoints.
-
+
The Resource Browser allows you to perform the following actions:
diff --git a/src/content/docs/aws/services/s3.md b/src/content/docs/aws/services/s3.mdx
similarity index 84%
rename from src/content/docs/aws/services/s3.md
rename to src/content/docs/aws/services/s3.mdx
index 118d83d4..430c4e5b 100644
--- a/src/content/docs/aws/services/s3.md
+++ b/src/content/docs/aws/services/s3.mdx
@@ -1,6 +1,5 @@
---
title: "Simple Storage Service (S3)"
-linkTitle: "Simple Storage Service (S3)"
description: Get started with Amazon S3 on LocalStack
persistence: supported
tags: ["Free"]
@@ -14,13 +13,13 @@ Each object or file within S3 encompasses essential attributes such as a unique
S3 can store unlimited objects, allowing you to store, retrieve, and manage your data in a highly adaptable and reliable manner.
LocalStack allows you to use the S3 APIs in your local environment to create new buckets, manage your S3 objects, and test your S3 configurations locally.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_s3" >}}), which provides information on the extent of S3's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of S3's integration with LocalStack.
## Getting started
This guide is designed for users new to S3 and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
-Start your LocalStack container using your [preferred method]({{< ref "getting-started/installation" >}}).
+Start your LocalStack container using your preferred method.
We will demonstrate how you can create an S3 bucket, manage S3 objects, and generate pre-signed URLs for S3 objects.
### Create an S3 bucket
@@ -28,16 +27,16 @@ We will demonstrate how you can create an S3 bucket, manage S3 objects, and gene
You can create an S3 bucket using the [`CreateBucket`](https://docs.aws.amazon.com/cli/latest/reference/s3api/create-bucket.html) API.
Run the following command to create an S3 bucket named `sample-bucket`:
-{{< command >}}
-$ awslocal s3api create-bucket --bucket sample-bucket
-{{< / command >}}
+```bash
+awslocal s3api create-bucket --bucket sample-bucket
+```
You can list your S3 buckets using the [`ListBuckets`](https://docs.aws.amazon.com/cli/latest/reference/s3api/list-buckets.html) API.
Run the following command to list your S3 buckets:
-{{< command >}}
-$ awslocal s3api list-buckets
-{{< / command >}}
+```bash
+awslocal s3api list-buckets
+```
On successful creation of the S3 bucket, you will see the following output:
@@ -62,20 +61,20 @@ To upload a file to your S3 bucket, you can use the [`PutObject`](https://docs.a
Download a random image from the internet and save it as `image.jpg`.
Run the following command to upload the file to your S3 bucket:
-{{< command >}}
-$ awslocal s3api put-object \
+```bash
+awslocal s3api put-object \
--bucket sample-bucket \
--key image.jpg \
--body image.jpg
-{{< / command >}}
+```
You can list the objects in your S3 bucket using the [`ListObjects`](https://docs.aws.amazon.com/cli/latest/reference/s3api/list-objects.html) API.
Run the following command to list the objects in your S3 bucket:
-{{< command >}}
-$ awslocal s3api list-objects \
+```bash
+awslocal s3api list-objects \
--bucket sample-bucket
-{{< / command >}}
+```
If your image has been uploaded successfully, you will see the following output:
@@ -99,14 +98,17 @@ If your image has been uploaded successfully, you will see the following output:
Run the following command to upload a file named `index.html` to your S3 bucket:
-{{< command >}}
+```bash
+awslocal s3api put-object --bucket sample-bucket --key index.html --body index.html
+```
-$ awslocal s3api put-object --bucket sample-bucket --key index.html --body index.html
+The following output would be retrieved:
+```bash
{
"ETag": "\"d41d8cd98f00b204e9800998ecf8427e\""
}
-{{< / command >}}
+```
### Generate a pre-signed URL for S3 object
@@ -115,9 +117,9 @@ Pre-signed URL allows anyone to retrieve the S3 object with an HTTP GET request.
Run the following command to generate a pre-signed URL for your S3 object:
-{{< command >}}
-$ awslocal s3 presign s3://sample-bucket/image.jpg
-{{< / command >}}
+```bash
+awslocal s3 presign s3://sample-bucket/image.jpg
+```
You will see a generated pre-signed URL for your S3 object.
You can use [curl](https://curl.se/) or [`wget`](https://www.gnu.org/software/wget/) to retrieve the S3 object using the pre-signed URL.
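+
+For example, a sketch with a placeholder URL (substitute the URL printed by the `presign` command):
+
+```bash
+# Replace <presigned-url> with the generated URL
+curl -O "<presigned-url>"
+```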
@@ -143,13 +145,13 @@ By default, most SDKs will try to use **Virtual-Hosted style** requests and prep
However, if the endpoint is not prefixed by `s3.`, LocalStack will not be able to understand the request and it will most likely result in an error.
You can either change the endpoint to an S3-specific one, or configure your SDK to use **Path style** requests instead.
-Check out our [SDK documentation]({{< ref "sdks" >}}) to learn how you can configure AWS SDKs to access LocalStack and S3.
+Check out our [SDK documentation](/aws/integrations/aws-sdks) to learn how you can configure AWS SDKs to access LocalStack and S3.
-{{< callout "tip" >}}
+:::note
While using [AWS SDKs](https://aws.amazon.com/developer/tools/#SDKs), you would need to configure the `ForcePathStyle` parameter to `true` in the S3 client configuration to use **Path style** requests.
If you want to use virtual host addressing of buckets, you can remove `ForcePathStyle` from the configuration.
-The `ForcePathStyle` parameter name can vary between SDK and languages, please check our [SDK documentation]({{< ref "sdks" >}})
-{{< /callout >}}
+The `ForcePathStyle` parameter name can vary between SDKs and languages; please check our [SDK documentation](/aws/integrations/aws-sdks).
+:::
If your endpoint is not prefixed with `s3.`, all requests are treated as **Path style** requests.
Using the `s3.localhost.localstack.cloud` endpoint URL is recommended for all requests aimed at S3.
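+
+To illustrate the difference, the same object fetched in both addressing styles could look like the following sketch (bucket and object from the earlier example, assuming the default port and relaxed signature validation locally):
+
+```bash
+# Virtual-hosted style: the bucket is part of the host name
+curl http://sample-bucket.s3.localhost.localstack.cloud:4566/image.jpg --output image.jpg
+
+# Path style: the bucket is the first segment of the path
+curl http://localhost:4566/sample-bucket/image.jpg --output image.jpg
+```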
@@ -168,12 +170,17 @@ Follow this step-by-step guide to configure CORS rules on your S3 bucket.
Run the following command on your terminal to create your S3 bucket:
-{{< command >}}
-$ awslocal s3api create-bucket --bucket cors-bucket
+```bash
+awslocal s3api create-bucket --bucket cors-bucket
+```
+
+The following output would be retrieved:
+
+```bash
{
"Location": "/cors-bucket"
}
-{{< / command >}}
+```
Next, create a JSON file with the CORS configuration.
The file should have the following format:
@@ -191,22 +198,22 @@ The file should have the following format:
}
```
-{{< callout >}}
+:::note
Note that this configuration is a sample, and you can tailor it to fit your needs better, for example, restricting the **AllowedHeaders** to specific ones.
-{{< /callout >}}
+:::
Save the file locally with a name of your choice, for example, `cors-config.json`.
Run the following command to apply the CORS configuration to your S3 bucket:
-{{< command >}}
-$ awslocal s3api put-bucket-cors --bucket cors-bucket --cors-configuration file://cors-config.json
-{{< / command >}}
+```bash
+awslocal s3api put-bucket-cors --bucket cors-bucket --cors-configuration file://cors-config.json
+```
You can further verify that the CORS configuration was applied successfully by running the following command:
-{{< command >}}
-$ awslocal s3api get-bucket-cors --bucket cors-bucket
-{{< / command >}}
+```bash
+awslocal s3api get-bucket-cors --bucket cors-bucket
+```
On applying the configuration successfully, you should see the same JSON configuration file you created earlier.
Your S3 bucket is configured to allow cross-origin resource sharing, and if you try to send requests from your local application running on [localhost:3000](http://localhost:3000), they should be successful.
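+
+You can also exercise the rules directly with a preflight request; a sketch using curl (the origin matches the configuration above, and the object key is an assumption):
+
+```bash
+curl -i -X OPTIONS \
+    -H "Origin: http://localhost:3000" \
+    -H "Access-Control-Request-Method: PUT" \
+    http://s3.localhost.localstack.cloud:4566/cors-bucket/test.txt
+```
+
+A successful preflight response includes the `Access-Control-Allow-Origin` and `Access-Control-Allow-Methods` headers from your configuration.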
@@ -233,10 +240,10 @@ We can edit the JSON file `cors-config.json` you created earlier with the follow
You can now run the same steps as before to update the CORS configuration and verify if it is applied correctly:
-{{< command >}}
-$ awslocal s3api put-bucket-cors --bucket cors-bucket --cors-configuration file://cors-config.json
-$ awslocal s3api get-bucket-cors --bucket cors-bucket
-{{< / command >}}
+```bash
+awslocal s3api put-bucket-cors --bucket cors-bucket --cors-configuration file://cors-config.json
+awslocal s3api get-bucket-cors --bucket cors-bucket
+```
You can try again to upload files in your bucket from the [LocalStack Web Application](https://app.localstack.cloud) and it should work.
@@ -245,17 +252,22 @@ You can try again to upload files in your bucket from the [LocalStack Web Applic
LocalStack provides a Docker image for S3, which you can use to run S3 in a Docker container.
The image is available on [Docker Hub](https://hub.docker.com/r/localstack/localstack) and can be pulled using the following command:
-{{< command >}}
-$ docker pull localstack/localstack:s3-latest
-{{< / command >}}
+```bash
+docker pull localstack/localstack:s3-latest
+```
The S3 Docker image only supports the S3 APIs and does not include other services like Lambda, DynamoDB, etc.
You can run the S3 Docker image using any of the following commands:
-{{< tabpane lang="shell" >}}
-{{< tab header="LocalStack CLI" lang="shell" >}}
+import { Tabs, TabItem } from '@astrojs/starlight/components';
+
+<Tabs>
+<TabItem label="LocalStack CLI">
+```bash
IMAGE_NAME=localstack/localstack:s3-latest localstack start
-{{< /tab >}}
-{{< tab header="Docker Compose" lang="yml" >}}
+```
+</TabItem>
+<TabItem label="Docker Compose">
+```yaml
services:
localstack:
container_name: "${LOCALSTACK_DOCKER_NAME:-localstack-main}"
@@ -267,22 +279,25 @@ services:
volumes:
- "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
-{{< /tab >}}
-{{< tab header="Docker" lang="shell" >}}
+```
+</TabItem>
+<TabItem label="Docker">
+```bash
docker run \
--rm \
-p 4566:4566 \
localstack/localstack:s3-latest
-{{< /tab >}}
-{{< /tabpane >}}
+```
+</TabItem>
+</Tabs>
The S3 Docker image has similar parity with the S3 APIs supported by LocalStack Docker image.
-You can use similar [configuration options]({{< ref "configuration/#s3" >}}) to alter the behaviour of the S3 Docker image, such as `DEBUG` or `S3_SKIP_SIGNATURE_VALIDATION`.
+You can use similar [configuration options](/aws/capabilities/config/configuration/#s3) to alter the behaviour of the S3 Docker image, such as `DEBUG` or `S3_SKIP_SIGNATURE_VALIDATION`.
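+
+For instance, both options can be passed as environment variables when starting the container (a sketch based on the Docker command above):
+
+```bash
+docker run \
+    --rm \
+    -p 4566:4566 \
+    -e DEBUG=1 \
+    -e S3_SKIP_SIGNATURE_VALIDATION=0 \
+    localstack/localstack:s3-latest
+```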
-{{< callout >}}
+:::note
The S3 Docker image does not support persistence, and all data is lost when the container is stopped.
To use persistence or save the container state as a Cloud Pod, you need to use the [`localstack/localstack-pro`](https://hub.docker.com/r/localstack/localstack-pro) image.
-{{< /callout >}}
+:::
## SSE-C Encryption
@@ -303,10 +318,10 @@ However, LocalStack does not support the actual encryption and decryption of obj
## Resource Browser
-The LocalStack Web Application provides a [Resource Browser]({{< ref "resource-browser" >}}) for managing S3 buckets & configurations.
+The LocalStack Web Application provides a [Resource Browser](/aws/capabilities/web-app/resource-browser) for managing S3 buckets & configurations.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **S3** under the **Storage** section.
-
+
The Resource Browser allows you to perform the following actions:
@@ -324,4 +339,4 @@ The following code snippets and sample applications provide practical examples o
- [Serverless Transcription application using Transcribe, S3, Lambda, SQS, and SES](https://github.com/localstack/sample-transcribe-app)
- [Query data in S3 Bucket with Amazon Athena, Glue Catalog & CloudFormation](https://github.com/localstack/query-data-s3-athena-glue-sample)
- [Serverless Image Resizer with Lambda, S3, SNS, and SES](https://github.com/localstack/serverless-image-resizer)
-- [Host a static website locally using Simple Storage Service (S3) and Terraform with LocalStack]({{< ref "s3-static-website-terraform" >}})
+- [Host a static website locally using Simple Storage Service (S3) and Terraform with LocalStack]()
diff --git a/src/content/docs/aws/services/sagemaker.md b/src/content/docs/aws/services/sagemaker.md
index 6ff02246..5bafd5ab 100644
--- a/src/content/docs/aws/services/sagemaker.md
+++ b/src/content/docs/aws/services/sagemaker.md
@@ -1,6 +1,5 @@
---
title: "SageMaker"
-linkTitle: "SageMaker"
description: Get started with SageMaker on LocalStack
tags: ["Ultimate"]
---
@@ -11,13 +10,13 @@ Amazon SageMaker is a fully managed service provided by Amazon Web Services (AWS
It streamlines the machine learning development process, reduces the time and effort required to build and deploy models, and offers the scalability and flexibility needed for large-scale machine learning projects in the AWS cloud.
LocalStack provides a local version of the SageMaker API, which allows running jobs to create machine learning models (e.g., using PyTorch) and to deploy them.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_sagemaker" >}}), which provides information on the extent of Sagemaker's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of Sagemaker's integration with LocalStack.
-{{< callout >}}
+:::note
LocalStack supports custom-built models in SageMaker.
You can push your Docker image to LocalStack's Elastic Container Registry (ECR) and use it in SageMaker.
LocalStack will use the local ECR image to create a SageMaker model.
-{{< /callout >}}
+:::
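+
+For illustration, the flow might look like the following sketch; the repository, image, model name, and role here are hypothetical placeholders, and `<REPOSITORY_URI>` stands for the `repositoryUri` returned by `create-repository`:
+
+```bash
+awslocal ecr create-repository --repository-name my-model
+docker tag my-model:latest <REPOSITORY_URI>:latest
+docker push <REPOSITORY_URI>:latest
+awslocal sagemaker create-model \
+  --model-name my-custom-model \
+  --execution-role-arn arn:aws:iam::000000000000:role/sagemaker-role \
+  --primary-container Image=<REPOSITORY_URI>:latest
+```
+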
## Getting started
@@ -29,46 +28,46 @@ We will demonstrate an application illustrating running a machine learning job u
- Creates a SageMaker Endpoint for accessing the model
- Invokes the endpoint directly on the container via Boto3
-{{< callout >}}
+:::note
SageMaker is a fairly comprehensive API.
Currently, a subset of its functionality is provided locally, but new features are added on a regular basis.
-{{< /callout >}}
+:::
### Download the sample application
You can download the sample application from [GitHub](https://github.com/localstack/localstack-pro-samples/tree/master/sagemaker-inference) or by running the following commands:
-{{< command >}}
-$ mkdir localstack-samples && cd localstack-samples
-$ git init
-$ git remote add origin -f git@github.com:localstack/localstack-pro-samples.git
-$ git config core.sparseCheckout true
-$ echo sagemaker-inference >> .git/info/sparse-checkout
-$ git pull origin master
-{{< /command >}}
+```bash
+mkdir localstack-samples && cd localstack-samples
+git init
+git remote add origin -f git@github.com:localstack/localstack-pro-samples.git
+git config core.sparseCheckout true
+echo sagemaker-inference >> .git/info/sparse-checkout
+git pull origin master
+```
### Set up the environment
After downloading the sample application, you can set up your Docker Client to pull the AWS Deep Learning images by running the following command:
-{{< command >}}
-$ aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 763104351884.dkr.ecr.us-east-1.amazonaws.com
-{{< /command >}}
+```bash
+aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 763104351884.dkr.ecr.us-east-1.amazonaws.com
+```
Since the images are quite large (several gigabytes), it's a good idea to pull the images using Docker in advance.
-{{< command >}}
-$ docker pull 763104351884.dkr.ecr.us-east-1.amazonaws.com/pytorch-inference:1.5.0-cpu-py3
-{{< /command >}}
+```bash
+docker pull 763104351884.dkr.ecr.us-east-1.amazonaws.com/pytorch-inference:1.5.0-cpu-py3
+```
### Run the sample application
Start your LocalStack container using your preferred method.
Run the sample application by executing the following command:
-{{< command >}}
-$ python3 main.,py
-{{< /command >}}
+```bash
+python3 main.py
+```
You should see the following output:
@@ -92,19 +91,19 @@ You can also invoke a serverless endpoint, by navigating to `main.py` and uncomm
## Resource Browser
-The LocalStack Web Application provides a [Resource Browser]({{< ref "resource-browser" >}}) for managing Lambda resources.
+The LocalStack Web Application provides a [Resource Browser](/aws/capabilities/web-app/resource-browser) for managing SageMaker resources.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Sagemaker** under the **Compute** section.
The Resource Browser displays Models, Endpoint Configurations, and Endpoints.
You can click on individual resources to view their details.
-
+
The Resource Browser allows you to perform the following actions:
- **Create and Remove Models**: You can remove existing models and create new models with the required configuration.
-
+ 
- **Endpoint Configurations & Endpoints**: You can create endpoints from the Resource Browser that host your deployed machine learning model.
You can also create an endpoint configuration that specifies the type and number of instances used to serve your model on an endpoint.
diff --git a/src/content/docs/aws/services/scheduler.md b/src/content/docs/aws/services/scheduler.md
index 09ebf360..d81aeeaa 100644
--- a/src/content/docs/aws/services/scheduler.md
+++ b/src/content/docs/aws/services/scheduler.md
@@ -1,6 +1,5 @@
---
title: "EventBridge Scheduler"
-linkTitle: "EventBridge Scheduler"
description: Get started with EventBridge Scheduler on LocalStack
tags: ["Free"]
---
@@ -12,7 +11,7 @@ You can use EventBridge Scheduler to create schedules that run at a specific tim
You can also use EventBridge Scheduler to create schedules that run within a flexible time window.
LocalStack allows you to use the Scheduler APIs in your local environment to create and run schedules.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_scheduler" >}}), which provides information on the extent of EventBridge Scheduler's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of EventBridge Scheduler's integration with LocalStack.
## Getting started
@@ -26,18 +25,18 @@ We will demonstrate how you can create a new schedule, list all schedules, and t
You can create a new SQS queue using the [`CreateQueue`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_CreateQueue.html) API.
Run the following command to create a new SQS queue:
-{{< command >}}
-$ awslocal sqs create-queue --queue-name local-notifications
-{{< /command >}}
+```bash
+awslocal sqs create-queue --queue-name local-notifications
+```
You can fetch the Queue ARN using the [`GetQueueAttributes`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_GetQueueAttributes.html) API.
Run the following command to fetch the Queue ARN by specifying the Queue URL:
-{{< command >}}
-$ awslocal sqs get-queue-attributes \
+```bash
+awslocal sqs get-queue-attributes \
--queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/local-notifications \
--attribute-names All
-{{< /command >}}
+```
Save the Queue ARN for later use.
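+
+If you only need the ARN itself, you can extract it directly with the CLI's `--query` flag:
+
+```bash
+awslocal sqs get-queue-attributes \
+  --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/local-notifications \
+  --attribute-names QueueArn \
+  --query 'Attributes.QueueArn' \
+  --output text
+```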
@@ -46,13 +45,13 @@ Save the Queue ARN for later use.
You can create a new schedule using the [`CreateSchedule`](https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_CreateSchedule.html) API.
Run the following command to create a new schedule:
-{{< command >}}
-$ awslocal scheduler create-schedule \
+```bash
+awslocal scheduler create-schedule \
--name sqs-templated-schedule \
--schedule-expression 'rate(5 minutes)' \
--target '{"RoleArn": "arn:aws:iam::000000000000:role/schedule-role", "Arn":"arn:aws:sqs:us-east-1:000000000000:local-notifications", "Input": "test" }' \
--flexible-time-window '{ "Mode": "OFF"}'
-{{< /command >}}
+```
The following output is displayed:
@@ -67,9 +66,9 @@ The following output is displayed:
You can list all schedules using the [`ListSchedules`](https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_ListSchedules.html) API.
Run the following command to list all schedules:
-{{< command >}}
-$ awslocal scheduler list-schedules
-{{< /command >}}
+```bash
+awslocal scheduler list-schedules
+```
The following output is displayed:
@@ -96,19 +95,18 @@ The following output is displayed:
You can tag a schedule using the [`TagResource`](https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_TagResource.html) API.
Run the following command to tag a schedule:
-{{< command >}}
-$ awslocal scheduler tag-resource \
+```bash
+awslocal scheduler tag-resource \
--resource-arn arn:aws:scheduler:us-east-1:000000000000:schedule/default/sqs-templated-schedule \
--tags Key=Name,Value=Test
-{{< /command >}}
+```
You can view the tags associated with a schedule using the [`ListTagsForResource`](https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_ListTagsForResource.html) API.
Run the following command to list the tags associated with a schedule:
-{{< command >}}
-$ awslocal scheduler list-tags-for-resource \
- --resource-arn arn:aws:scheduler:us-east-1:000000000000:schedule/default/sqs-templated-schedule
-{{< /command >}}
+```bash
+awslocal scheduler list-tags-for-resource \
+  --resource-arn arn:aws:scheduler:us-east-1:000000000000:schedule/default/sqs-templated-schedule
+```
The following output is displayed:
diff --git a/src/content/docs/aws/services/secretsmanager.md b/src/content/docs/aws/services/secretsmanager.md
index bc05e433..16e93d04 100644
--- a/src/content/docs/aws/services/secretsmanager.md
+++ b/src/content/docs/aws/services/secretsmanager.md
@@ -1,6 +1,5 @@
---
title: "Secrets Manager"
-linkTitle: "Secrets Manager"
description: Get started with Secrets Manager on LocalStack
persistence: supported
tags: ["Free"]
@@ -13,7 +12,7 @@ Secrets Manager integrates seamlessly with AWS services, making it easier to man
Secrets Manager supports automatic secret rotation, replacing long-term secrets with short-term ones to mitigate the risk of compromise without requiring application updates.
LocalStack allows you to use the Secrets Manager APIs in your local environment to manage, retrieve, and rotate secrets.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_secretsmanager" >}}), which provides information on the extent of Secrets Manager's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of Secrets Manager's integration with LocalStack.
## Getting started
@@ -26,52 +25,52 @@ We will demonstrate how to create a secret, get the secret value, and rotate the
Before you create a secret, create a file named `secrets.json` and add the following content:
-{{}}
-$ touch secrets.json
-$ cat > secrets.json << EOF
+```bash
+touch secrets.json
+cat > secrets.json << EOF
{
"username": "admin",
"password": "password"
}
EOF
-{{ }}
+```
You can now create a secret using the [`CreateSecret`](https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_CreateSecret.html) API.
Execute the following command to create a secret named `test-secret`:
-{{}}
-$ awslocal secretsmanager create-secret \
+```bash
+awslocal secretsmanager create-secret \
--name test-secret \
--description "LocalStack Secret" \
--secret-string file://secrets.json
-{{ }}
+```
Upon successful execution, the output will provide you with the ARN of the newly created secret.
This identifier will be useful for further operations or integrations.
The following output would be retrieved:
-{{}}
+```json
{
"ARN": "arn:aws:secretsmanager:us-east-1:000000000000:secret:test-secret-pyfjVP",
"Name": "test-secret",
"VersionId": "a50c6752-3343-4eb0-acf3-35c74f00f707"
}
-{{ }}
+```
### Describe the secret
To retrieve the details of the secret you created earlier, you can use the [`DescribeSecret`](https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_DescribeSecret.html) API.
Execute the following command:
-{{}}
-$ awslocal secretsmanager describe-secret \
+```bash
+awslocal secretsmanager describe-secret \
--secret-id test-secret
-{{ }}
+```
The following output would be retrieved:
-{{}}
+```json
{
"ARN": "arn:aws:secretsmanager:us-east-1:000000000000:secret:test-secret-pyfjVP",
"Name": "test-secret",
@@ -84,29 +83,29 @@ The following output would be retrieved:
},
"CreatedDate": 1692882479.857329
}
-{{ }}
+```
You can also get a list of the secrets available in your local environment that have **Secret** in the name using the [`ListSecrets`](https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_ListSecrets.html) API.
Execute the following command:
-{{}}
-$ awslocal secretsmanager list-secrets \
+```bash
+awslocal secretsmanager list-secrets \
--filters Key=name,Values=Secret
-{{ }}
+```
### Get the secret value
To retrieve the value of the secret you created earlier, you can use the [`GetSecretValue`](https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_GetSecretValue.html) API.
Execute the following command:
-{{}}
-$ awslocal secretsmanager get-secret-value \
+```bash
+awslocal secretsmanager get-secret-value \
--secret-id test-secret
-{{ }}
+```
The following output would be retrieved:
-{{}}
+```json
{
"ARN": "arn:aws:secretsmanager:us-east-1:000000000000:secret:test-secret-pyfjVP",
"Name": "test-secret",
@@ -117,16 +116,16 @@ The following output would be retrieved:
],
"CreatedDate": 1692882479.857329
}
-{{ }}
+```
You can tag your secret using the [`TagResource`](https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_TagResource.html) API.
Execute the following command:
-{{}}
-$ awslocal secretsmanager tag-resource \
+```bash
+awslocal secretsmanager tag-resource \
--secret-id test-secret \
--tags Key=Environment,Value=Development
-{{ }}
+```
### Rotate the secret
@@ -136,15 +135,15 @@ You can copy the code from a [Secrets Manager template](https://docs.aws.amazon.
Zip the Lambda function and create a Lambda function using the [`CreateFunction`](https://docs.aws.amazon.com/lambda/latest/dg/API_CreateFunction.html) API.
Execute the following command:
-{{}}
-$ zip my-function.zip lambda_function.py
-$ awslocal lambda create-function \
+```bash
+zip my-function.zip lambda_function.py
+awslocal lambda create-function \
--function-name my-rotation-function \
--runtime python3.9 \
--zip-file fileb://my-function.zip \
  --handler lambda_function.lambda_handler \
--role arn:aws:iam::000000000000:role/service-role/rotation-lambda-role
-{{ }}
+```
You can now set a resource policy on the Lambda function to allow Secrets Manager to invoke it using [`AddPermission`](https://docs.aws.amazon.com/lambda/latest/dg/API_AddPermission.html) API.
@@ -152,30 +151,30 @@ Please note that this is not required with the default LocalStack settings, sinc
Execute the following command:
-{{}}
-$ awslocal lambda add-permission \
+```bash
+awslocal lambda add-permission \
--function-name my-rotation-function \
--action lambda:InvokeFunction \
--statement-id SecretsManager \
--principal secretsmanager.amazonaws.com
-{{ }}
+```
You can now create a rotation schedule for the secret using the [`RotateSecret`](https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_RotateSecret.html) API.
Execute the following command:
-{{}}
-$ awslocal secretsmanager rotate-secret \
+```bash
+awslocal secretsmanager rotate-secret \
  --secret-id test-secret \
--rotation-lambda-arn arn:aws:lambda:us-east-1:000000000000:function:my-rotation-function \
  --rotation-rules "{\"ScheduleExpression\": \"cron(0 16 1,15 * ? *)\", \"Duration\": \"2h\"}"
-{{ }}
+```
## Resource Browser
The LocalStack Web Application provides a Resource Browser for managing secrets in your local environment.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Secrets Manager** under the **Security Identity Compliance** section.
-
+
The Resource Browser allows you to perform the following actions:
diff --git a/src/content/docs/aws/services/serverlessrepo.md b/src/content/docs/aws/services/serverlessrepo.md
index c7f3f860..93ab1d1c 100644
--- a/src/content/docs/aws/services/serverlessrepo.md
+++ b/src/content/docs/aws/services/serverlessrepo.md
@@ -1,8 +1,6 @@
---
title: "Serverless Application Repository"
-linkTitle: "Serverless Application Repository"
-description: >
- Get started with Serverless Application Repository on LocalStack
+description: Get started with Serverless Application Repository on LocalStack
tags: ["Ultimate"]
---
@@ -13,7 +11,7 @@ Using Serverless Application Repository, developers can build & publish applicat
Serverless Application Repository provides a user-friendly interface to search, filter, and browse through a diverse catalog of serverless applications.
LocalStack allows you to use the Serverless Application Repository APIs in your local environment to create, update, delete, and list serverless applications and components.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_serverlessrepo" >}}), which provides information on the extent of Serverless Application Repository's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of Serverless Application Repository's integration with LocalStack.
## Getting started
@@ -26,9 +24,9 @@ We will demonstrate how to create a SAM application that comprises a Hello World
To create a sample SAM application using the `samlocal` CLI, execute the following command:
-{{< command >}}
-$ samlocal init --runtime python3.9
-{{< /command >}}
+```bash
+samlocal init --runtime python3.9
+```
This command downloads a sample SAM application template and generates a `template.yaml` file in the current directory.
The template includes a Lambda function and an API Gateway endpoint that supports a `GET` operation.
@@ -53,11 +51,11 @@ Metadata:
Once the Metadata section is added, run the following command to create the Lambda function deployment package and the packaged SAM template:
-{{< command >}}
+```bash
samlocal package \
--template-file template.yaml \
--output-template-file packaged.yaml
-{{< /command >}}
+```
This command generates a `packaged.yaml` file in the current directory containing the packaged SAM template.
The packaged template will be similar to the original template file, but it will now include a `CodeUri` property for the Lambda function, as shown in the example below:
@@ -74,9 +72,9 @@ Resources:
To retrieve the Application ID for your SAM application, you can utilize the [`awslocal`](https://github.com/localstack/awscli-local) CLI by running the following command:
-{{< command >}}
+```bash
awslocal serverlessrepo list-applications
-{{< /command >}}
+```
In the output, you will observe the `ApplicationId` property, which is the Application ID for your SAM application, along with other properties such as `Author`, `Description`, `Name`, `SpdxLicenseId`, and `Version` that provide further details about your application.
@@ -84,20 +82,20 @@ In the output, you will observe the `ApplicationId` property in the output, whic
To publish your application to the Serverless Application Repository, execute the following command:
-{{< command >}}
+```bash
samlocal publish \
--template packaged.yaml \
--region us-east-1
-{{< /command >}}
+```
### Delete the SAM application
To remove a SAM application from the Serverless Application Repository, you can use the following command:
-{{< command >}}
+```bash
awslocal serverlessrepo delete-application \
  --application-id <APPLICATION_ID>
-{{< /command >}}
+```
Replace `<APPLICATION_ID>` with the Application ID of your SAM application that you retrieved in the previous step.
diff --git a/src/content/docs/aws/services/servicediscovery.md b/src/content/docs/aws/services/servicediscovery.md
index acc5c8f0..078714cf 100644
--- a/src/content/docs/aws/services/servicediscovery.md
+++ b/src/content/docs/aws/services/servicediscovery.md
@@ -1,8 +1,6 @@
---
title: "Service Discovery"
-linkTitle: "Service Discovery"
-description: >
- Get started with Service Discovery on LocalStack
+description: Get started with Service Discovery on LocalStack
tags: ["Ultimate"]
---
@@ -13,7 +11,7 @@ Service Discovery allows for a centralized mechanism for dynamically registering
Service discovery uses Cloud Map API actions to manage HTTP and DNS namespaces for services, enabling automatic registration and discovery of services running in the cluster.
LocalStack allows you to use the Service Discovery APIs in your local environment to monitor and manage your services across various environments and network topologies.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_servicediscovery" >}}), which provides information on the extent of Service Discovery's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of Service Discovery's integration with LocalStack.
## Getting Started
@@ -29,11 +27,11 @@ This API allows you to define a custom name for your namespace and specify the V
To create the private Cloud Map service discovery namespace, execute the following command:
-{{< command >}}
-$ awslocal servicediscovery create-private-dns-namespace \
+```bash
+awslocal servicediscovery create-private-dns-namespace \
--name tutorial \
  --vpc <VPC_ID>
-{{< /command >}}
+```
Ensure that you replace `<VPC_ID>` with the actual ID of the VPC you intend to use for the namespace.
Upon running this command, you will receive an output containing an `OperationId`.
@@ -41,10 +39,10 @@ This identifier can be used to check the status of the operation.
To verify the status of the operation, execute the following command:
-{{< command >}}
-$ awslocal servicediscovery get-operation \
+```bash
+awslocal servicediscovery get-operation \
  --operation-id <OPERATION_ID>
-{{< /command >}}
+```
The output will consist of a `NAMESPACE` ID, which you will need to create a service within the namespace.
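+
+For example, once the operation succeeds you can extract the namespace ID directly; this sketch assumes the standard `Operation.Targets.NAMESPACE` field of the `GetOperation` response:
+
+```bash
+awslocal servicediscovery get-operation \
+  --operation-id <OPERATION_ID> \
+  --query 'Operation.Targets.NAMESPACE' \
+  --output text
+```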
@@ -55,12 +53,12 @@ This service represents a specific component or resource in your application.
To create a service within the namespace, execute the following command:
-{{< command >}}
-$ awslocal servicediscovery create-service \
+```bash
+awslocal servicediscovery create-service \
--name myapplication \
  --dns-config "NamespaceId="<NAMESPACE_ID>",DnsRecords=[{Type="A",TTL="300"}]" \
--health-check-custom-config FailureThreshold=1
-{{< /command >}}
+```
Upon successful execution, the output will provide you with the Service ID and the Amazon Resource Name (ARN) of the newly created service.
These identifiers will be useful for further operations or integrations.
@@ -72,10 +70,10 @@ To integrate the service you created earlier with an ECS (Elastic Container Serv
Start by creating an ECS cluster using the [`CreateCluster`](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_CreateCluster.html) API.
Execute the following command:
-{{< command >}}
-$ awslocal ecs create-cluster \
+```bash
+awslocal ecs create-cluster \
--cluster-name tutorial
-{{< /command >}}
+```
### Register a task definition
@@ -120,10 +118,10 @@ Create a file named `fargate-task.json` and add the following content:
Register the task definition using the [`RegisterTaskDefinition`](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_RegisterTaskDefinition.html) API.
Execute the following command:
-{{< command >}}
-$ awslocal ecs register-task-definition \
+```bash
+awslocal ecs register-task-definition \
--cli-input-json file://fargate-task.json
-{{< /command >}}
+```
### Create an ECS service
@@ -131,20 +129,26 @@ To create an ECS service, you will need to retrieve the `securityGroups` and `su
You can obtain this information by using the [`DescribeVpcs`](https://docs.aws.amazon.com/vpc/latest/APIReference/API_DescribeVpcs.html) API.
Execute the following command to retrieve the details of all VPCs:
-{{< command >}}
-$ awslocal ec2 describe-vpcs
-{{< /command >}}
+```bash
+awslocal ec2 describe-vpcs
+```
The output will include a list of VPCs.
Locate the VPC that was used to create the Cloud Map namespace and make a note of its `VpcId` value.
Next, execute the following commands to retrieve the `securityGroups` and `subnets` associated with the VPC:
-{{< command >}}
-$ awslocal ec2 describe-security-groups --filters Name=vpc-id,Values=vpc- --query 'SecurityGroups[*].[GroupId, GroupName]' --output text
+```bash
+awslocal ec2 describe-security-groups \
+  --filters Name=vpc-id,Values=<VPC_ID> \
+ --query 'SecurityGroups[*].[GroupId, GroupName]' \
+ --output text
-$ awslocal ec2 describe-subnets --filters Name=vpc-id,Values=vpc- --query 'Subnets[*].[SubnetId, CidrBlock]' --output text
-{{< /command >}}
+awslocal ec2 describe-subnets \
+  --filters Name=vpc-id,Values=<VPC_ID> \
+ --query 'Subnets[*].[SubnetId, CidrBlock]' \
+ --output text
+```
Replace `<VPC_ID>` with the actual VpcId value of the VPC you identified earlier.
Make a note of the `GroupId` and `SubnetId` values.
@@ -177,20 +181,20 @@ Create a new file named `ecs-service-discovery.json` and add the following conte
Create your ECS service using the [`CreateService`](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_CreateService.html) API.
Execute the following command:
-{{< command >}}
-$ awslocal ecs create-service \
+```bash
+awslocal ecs create-service \
--cli-input-json file://ecs-service-discovery.json
-{{< /command >}}
+```
### Verify the service
You can use the Service Discovery service ID to verify that the service was created successfully.
Execute the following command:
-{{< command >}}
-$ awslocal servicediscovery list-instances \
+```bash
+awslocal servicediscovery list-instances \
  --service-id <SERVICE_ID>
-{{< /command >}}
+```
The output will consist of the resource ID, and you can further use the [`DiscoverInstances`](https://docs.aws.amazon.com/cloud-map/latest/api/API_DiscoverInstances.html) API.
This API allows you to query the DNS records associated with the service and perform various operations.
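+
+For example, a minimal `DiscoverInstances` call against the namespace and service created above looks like this:
+
+```bash
+awslocal servicediscovery discover-instances \
+  --namespace-name tutorial \
+  --service-name myapplication
+```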
@@ -212,31 +216,33 @@ Both `list-services` and `list-namespaces` support `EQ` (default condition if no
Both conditions support only a single value to match by.
The following examples demonstrate how to use filters with these operations:
-{{< command >}}
-$ awslocal servicediscovery list-namespaces \
+```bash
+awslocal servicediscovery list-namespaces \
--filters "Name=HTTP_NAME,Values=['example-namespace'],Condition=EQ"
-{{< /command >}}
+```
-{{< command >}}
-$ awslocal servicediscovery list-services \
+```bash
+awslocal servicediscovery list-services \
--filters "Name=NAMESPACE_ID,Values=['id_to_match']"
-{{< /command >}}
+```
The command `discover-instance` supports parameters and optional parameters as filter criteria.
Conditions in parameters must be matched for instances to be returned.
For optional parameters, if one or more conditions match, the matching subset is returned; if no conditions match, all unfiltered results are returned.
This command will only return instances where the parameter `env` is equal to `fuu`:
-{{< command >}}
-$ awslocal servicediscovery discover-instances \
+
+```bash
+awslocal servicediscovery discover-instances \
--namespace-name example-namespace \
--service-name example-service \
--query-parameters "env"="fuu"
-{{< /command >}}
+```
This command instead will return all instances where the optional parameter `env` is equal to `bar`, but if no instances match, all instances are returned:
-{{< command >}}
-$ awslocal servicediscovery discover-instances \
+
+```bash
+awslocal servicediscovery discover-instances \
--namespace-name example-namespace \
--service-name example-service \
--optional-parameters "env"="bar"
-{{< /command >}}
+```
diff --git a/src/content/docs/aws/services/ses.md b/src/content/docs/aws/services/ses.md
index f4509be4..bff9e9a5 100644
--- a/src/content/docs/aws/services/ses.md
+++ b/src/content/docs/aws/services/ses.md
@@ -1,6 +1,5 @@
---
title: "Simple Email Service (SES)"
-linkTitle: "Simple Email Service (SES)"
description: Get started with Amazon Simple Email Service (SES) on LocalStack
tags: ["Free", "Base"]
persistence: supported
@@ -11,12 +10,12 @@ persistence: supported
Simple Email Service (SES) is an emailing service that can be integrated with other cloud-based services.
It provides API to facilitate email templating, sending bulk emails and more.
-The supported APIs are available on the API coverage page for [SESv1]({{< ref "coverage_ses" >}}) and [SESv2]({{< ref "coverage_sesv2" >}}).
+The supported APIs are available on the API coverage page for [SESv1](), and [SESv2]().
-{{< callout "Note" >}}
+:::note
Users on the Free plan can use SESv1 APIs in LocalStack for basic mocking and testing.
For advanced features like SMTP integration and other emulation capabilities, consider the Ultimate plan.
-{{< /callout >}}
+:::
## Getting Started
@@ -30,38 +29,43 @@ A verified identity appears as part of the 'From' field in the sent email.
A single email identity can be added using the `VerifyEmailIdentity` operation.
-{{< command >}}
-$ awslocal ses verify-email-identity --email hello@example.com
+```bash
+awslocal ses verify-email-identity --email hello@example.com
-$ awslocal ses list-identities
+awslocal ses list-identities
{
"Identities": [
"hello@example.com"
]
}
-{{< /command >}}
+```
-{{< callout >}}
+:::note
On AWS, verifying email identities or domain identities requires additional steps, such as clicking verification links or changing DNS configuration, respectively.
In LocalStack, identities are automatically verified.
-{{< /callout >}}
+:::
Next, emails can be sent using the `SendEmail` operation.
-{{< command >}}
-$ awslocal ses send-email \
+```bash
+awslocal ses send-email \
--from "hello@example.com" \
--message 'Body={Text={Data="This is the email body"}},Subject={Data="This is the email subject"}' \
--destination 'ToAddresses=jeff@aws.com'
+```
+
+The following output is displayed:
+
+```json
{
"MessageId": "labpqxukegeaftfh-ymaouvvy-ribr-qeoy-izfp-kxaxbfcfsgbh-wpewvd"
}
-{{< /command >}}
+```
-{{< callout >}}
+:::note
In LocalStack Community, all operations are mocked and no real emails are sent.
In LocalStack Pro, it is possible to send real emails via an SMTP server.
-{{< /callout >}}
+:::
## Retrieve Sent Emails
@@ -70,56 +74,61 @@ Sent messages can be retrieved in following ways:
- **API endpoint:** LocalStack provides a service endpoint (`/_aws/ses`) which can be used to return in-memory saved messages.
A `GET` call returns all messages.
Query parameters `id` and `email` can be used to filter by message ID and message source respectively.
- {{< command >}}
-$ curl --silent localhost.localstack.cloud:4566/_aws/ses?email=hello@example.com | jq .
-{
- "messages": [
+
+ ```bash
+ curl --silent localhost.localstack.cloud:4566/_aws/ses?email=hello@example.com | jq .
+ ```
+
+ The following output is displayed:
+
+  ```json
{
- "Id": "dqxhhgoutkmylpbc-ffuqlkjs-ljld-fckp-hcph-wcsrkmxhhldk-pvadjc",
- "Region": "eu-central-1",
- "Destination": {
- "ToAddresses": [
- "jeff@aws.com"
- ]
- },
- "Source": "hello@example.com",
- "Subject": "This is the email subject",
- "Body": {
- "text_part": "This is the email body",
- "html_part": null
- },
- "Timestamp": "2023-09-11T08:37:13"
+ "messages": [
+ {
+ "Id": "dqxhhgoutkmylpbc-ffuqlkjs-ljld-fckp-hcph-wcsrkmxhhldk-pvadjc",
+ "Region": "eu-central-1",
+ "Destination": {
+ "ToAddresses": [
+ "jeff@aws.com"
+ ]
+ },
+ "Source": "hello@example.com",
+ "Subject": "This is the email subject",
+ "Body": {
+ "text_part": "This is the email body",
+ "html_part": null
+ },
+ "Timestamp": "2023-09-11T08:37:13"
+ }
+ ]
}
- ]
-}
- {{< /command >}}
+ ```
A `DELETE` call clears all messages from memory.
The query parameter `id` can be used to delete only a specific message.
- {{< command >}}
- $ curl -X DELETE localhost.localstack.cloud:4566/_aws/ses?id=dqxhhgoutkmylpbc-ffuqlkjs-ljld-fckp-hcph-wcsrkmxhhldk-pvadjc
- {{< /command >}}
-- **Filesystem:** All messages are saved to the state directory (see [filesystem layout]({{< ref "filesystem" >}})).
+
+ ```bash
+ curl -X DELETE localhost.localstack.cloud:4566/_aws/ses?id=dqxhhgoutkmylpbc-ffuqlkjs-ljld-fckp-hcph-wcsrkmxhhldk-pvadjc
+ ```
+- **Filesystem:** All messages are saved to the state directory (see [filesystem layout](/aws/capabilities/config/filesystem)).
The files are saved as JSON in the `ses/` subdirectory and named by the message ID.
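+
+  For example, with the state directory mounted at `./volume` on the host, you can inspect the stored messages like this (a sketch; the `state/ses` path follows the filesystem layout linked above):
+
+  ```bash
+  ls ./volume/state/ses/
+  ```
+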
## SMTP Integration
LocalStack Pro supports sending emails via an SMTP server.
To enable this, set the connection parameters and access credentials for the server in the configuration.
-Refer to the [Configuration]({{< ref "configuration#emails" >}}) guide for details.
+Refer to the [Configuration](/aws/capabilities/config/configuration/#emails) guide for details.
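+
+For illustration, a LocalStack Pro container pointed at a local MailDev instance might be configured as follows; this is a sketch assuming the `SMTP_HOST` and `SMTP_EMAIL` configuration variables described in that guide and a shared Docker network named `ls`:
+
+```bash
+docker run \
+  --rm \
+  -p 4566:4566 \
+  --network ls \
+  -e LOCALSTACK_AUTH_TOKEN=<your-auth-token> \
+  -e SMTP_HOST=maildev:1025 \
+  -e SMTP_EMAIL=sender@example.com \
+  localstack/localstack-pro
+```
+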
-{{< callout "tip" >}}
+:::tip
If you do not have access to a live SMTP server, you can use tools like [MailDev](https://github.com/maildev/maildev) or [smtp4dev](https://github.com/rnwood/smtp4dev).
These run as Docker containers on your local machine.
Make sure they run in the same Docker network as the LocalStack container.
-{{< /callout >}}
+:::
## Resource Browser
The LocalStack Web Application provides a Resource Browser for managing email identities and inspecting sent emails.
-
-
-
+
The Resource Browser allows you to perform the following actions:
- **Create Email Identity**: Create an email identity by clicking **Create Identity** and specifying the email address.
diff --git a/src/content/docs/aws/services/shield.md b/src/content/docs/aws/services/shield.md
index 3b32f837..92fffcd5 100644
--- a/src/content/docs/aws/services/shield.md
+++ b/src/content/docs/aws/services/shield.md
@@ -1,6 +1,5 @@
---
title: "Shield"
-linkTitle: "Shield"
description: Get started with Shield on LocalStack
tags: ["Ultimate"]
---
@@ -12,7 +11,7 @@ Shield provides always-on detection and inline mitigations that minimize applica
Shield detection and mitigation is designed to protect against threats, including ones that are not known to the service at the time of detection.
LocalStack allows you to use the Shield APIs in your local environment, and provides a simple way to mock and test the Shield service locally.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_shield" >}}), which provides information on the extent of Shield's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of Shield's integration with LocalStack.
## Getting Started
@@ -26,11 +25,11 @@ We will demonstrate how to create a Shield protection, list all protections, and
To create a Shield protection, use the [`CreateProtection`](https://docs.aws.amazon.com/cli/latest/reference/shield/create-protection.html) API.
The following command creates a Shield protection for a resource:
-{{< command >}}
-$ awslocal shield create-protection \
+```bash
+awslocal shield create-protection \
--name "my-protection" \
--resource-arn "arn:aws:elasticloadbalancing:us-east-1:000000000000:loadbalancer/app/my-alb/1234567890"
-{{< /command >}}
+```
The output should look similar to the following:
@@ -45,9 +44,9 @@ The output should look similar to the following:
To list all Shield protections, use the [`ListProtections`](https://docs.aws.amazon.com/cli/latest/reference/shield/list-protections.html) API.
The following command lists all Shield protections:
-{{< command >}}
-$ awslocal shield list-protections
-{{< /command >}}
+```bash
+awslocal shield list-protections
+```
The output should look similar to the following:
@@ -69,10 +68,10 @@ The output should look similar to the following:
To describe a Shield protection, use the [`DescribeProtection`](https://docs.aws.amazon.com/cli/latest/reference/shield/describe-protection.html) API.
The following command describes a Shield protection:
-{{< command >}}
-$ awslocal shield describe-protection \
+```bash
+awslocal shield describe-protection \
--protection-id "67908d33-16c0-443d-820a-31c02c4d5976"
-{{< /command >}}
+```
Replace the protection ID with the ID of the protection you want to describe.
The output should look similar to the following:
@@ -93,10 +92,10 @@ The output should look similar to the following:
To delete a Shield protection, use the [`DeleteProtection`](https://docs.aws.amazon.com/cli/latest/reference/shield/delete-protection.html) API.
The following command deletes a Shield protection:
-{{< command >}}
-$ awslocal shield delete-protection \
+```bash
+awslocal shield delete-protection \
--protection-id "67908d33-16c0-443d-820a-31c02c4d5976"
-{{< /command >}}
+```
## Current Limitations
diff --git a/src/content/docs/aws/services/sns.md b/src/content/docs/aws/services/sns.md
index 7b091368..e052249f 100644
--- a/src/content/docs/aws/services/sns.md
+++ b/src/content/docs/aws/services/sns.md
@@ -1,6 +1,5 @@
---
title: "Simple Notification Service (SNS)"
-linkTitle: "Simple Notification Service (SNS)"
description: Get started with Simple Notification Service (SNS) on LocalStack
persistence: supported
tags: ["Free"]
@@ -12,7 +11,7 @@ Simple Notification Service (SNS) is a serverless messaging service that can dis
SNS employs the Publish/Subscribe, an asynchronous messaging pattern that decouples services that produce events from services that process events.
LocalStack allows you to use the SNS APIs in your local environment to coordinate the delivery of messages to subscribing endpoints or clients.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_sns" >}}), which provides information on the extent of SNS's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of SNS's integration with LocalStack.
## Getting started
@@ -27,68 +26,68 @@ We will demonstrate how to create an SNS topic, publish messages, and subscribe
To create an SNS topic, use the [`CreateTopic`](https://docs.aws.amazon.com/sns/latest/api/API_CreateTopic.html) API.
Run the following command to create a topic named `localstack-topic`:
-{{< command >}}
-$ awslocal sns create-topic --name localstack-topic
-{{< /command >}}
+```bash
+awslocal sns create-topic --name localstack-topic
+```
You can set attributes on the SNS topic you created previously using the [`SetTopicAttributes`](https://docs.aws.amazon.com/sns/latest/api/API_SetTopicAttributes.html) API.
Run the following command to set the `DisplayName` attribute for the topic:
-{{< command >}}
-$ awslocal sns set-topic-attributes \
+```bash
+awslocal sns set-topic-attributes \
--topic-arn arn:aws:sns:us-east-1:000000000000:localstack-topic \
--attribute-name DisplayName \
--attribute-value MyTopicDisplayName
-{{< /command >}}
+```
You can list all the SNS topics using the [`ListTopics`](https://docs.aws.amazon.com/sns/latest/api/API_ListTopics.html) API.
Run the following command to list all the SNS topics:
-{{< command >}}
-$ awslocal sns list-topics
-{{< /command >}}
+```bash
+awslocal sns list-topics
+```
### Get attributes and publish messages to SNS topic
You can get attributes for a single SNS topic using the [`GetTopicAttributes`](https://docs.aws.amazon.com/sns/latest/api/API_GetTopicAttributes.html) API.
Run the following command to get the attributes for the SNS topic:
-{{< command >}}
-$ awslocal sns get-topic-attributes \
+```bash
+awslocal sns get-topic-attributes \
--topic-arn arn:aws:sns:us-east-1:000000000000:localstack-topic
-{{< /command >}}
+```
You can change the `topic-arn` to the ARN of the SNS topic you created previously.
To publish messages to the SNS topic, create a new file named `message.txt` in your current directory and add some content.
Run the following command to publish messages to the SNS topic using the [`Publish`](https://docs.aws.amazon.com/sns/latest/api/API_Publish.html) API:
-{{< command >}}
-$ awslocal sns publish \
+```bash
+awslocal sns publish \
--topic-arn "arn:aws:sns:us-east-1:000000000000:localstack-topic" \
--message file://message.txt
-{{< /command >}}
+```
### Subscribing to SNS topics and setting subscription attributes
You can subscribe to the SNS topic using the [`Subscribe`](https://docs.aws.amazon.com/sns/latest/api/API_Subscribe.html) API.
Run the following command to subscribe to the SNS topic:
-{{< command >}}
-$ awslocal sns subscribe \
+```bash
+awslocal sns subscribe \
--topic-arn arn:aws:sns:us-east-1:000000000000:localstack-topic \
--protocol email \
--notification-endpoint test@gmail.com
-{{< /command >}}
+```
You can configure the SNS subscription attributes using the `SubscriptionArn` returned by the previous step.
For example, run the following command to set the `RawMessageDelivery` attribute for the subscription:
-{{< command >}}
-$ awslocal sns set-subscription-attributes \
+```bash
+awslocal sns set-subscription-attributes \
--subscription-arn arn:aws:sns:us-east-1:000000000000:test-topic:b6f5e924-dbb3-41c9-aa3b-589dbae0cfff \
--attribute-name RawMessageDelivery --attribute-value true
-{{< /command >}}
+```
### Working with SQS subscriptions for SNS
@@ -96,32 +95,54 @@ The getting started covers email subscription, but SNS can integrate with many A
A common technology to integrate with is SQS.
First we need to ensure we create an SQS queue named `my-queue`:
-{{< command >}}
-$ awslocal sqs create-queue --queue-name my-queue
+
+```bash
+awslocal sqs create-queue --queue-name my-queue
+```
+
+The following output is displayed:
+
+```json
{
"QueueUrl": "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue"
}
-{{< /command >}}
+```
Subscribe the SQS queue to the topic we created previously:
-{{< command >}}
-$ awslocal sns subscribe --topic-arn "arn:aws:sns:us-east-1:000000000000:localstack-topic" --protocol sqs --notification-endpoint "arn:aws:sqs:us-east-1:000000000000:my-queue"
+
+```bash
+awslocal sns subscribe \
+ --topic-arn "arn:aws:sns:us-east-1:000000000000:localstack-topic" \
+ --protocol sqs \
+ --notification-endpoint "arn:aws:sqs:us-east-1:000000000000:my-queue"
+```
+
+The following output is displayed:
+
+```bash
{
"SubscriptionArn": "arn:aws:sns:us-east-1:000000000000:localstack-topic:636e2a73-0dda-4e09-9fdf-77f113d0edd8"
}
-{{< /command >}}
+```
Send a message to the queue via the topic:
-{{< command >}}
+
+```bash
-$ awslocal sns publish --topic-arn "arn:aws:sns:us-east-1:000000000000:localstack-topic" --message "hello"
+awslocal sns publish \
+  --topic-arn "arn:aws:sns:us-east-1:000000000000:localstack-topic" \
+  --message "hello"
+```
+
+The following output is displayed:
+
+```json
{
"MessageId": "5a1593ce-411b-44dc-861d-907daa05353b"
}
-{{< /command >}}
+```
Check that our message has arrived:
-{{< command >}}
-$ awslocal sqs receive-message --queue-url "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue"
+
+```bash
+awslocal sqs receive-message \
+ --queue-url "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue"
+```
+
+The following output is displayed:
+
+```json
{
"Messages": [
{
@@ -132,15 +153,19 @@ $ awslocal sqs receive-message --queue-url "http://sqs.us-east-1.localhost.local
}
]
}
-
-{{< /command >}}
+```
To remove the subscription, you need the subscription ARN, which you can find by listing the subscriptions.
You can list all the SNS subscriptions using the [`ListSubscriptions`](https://docs.aws.amazon.com/sns/latest/api/API_ListSubscriptions.html) API.
Run the following command to list all the SNS subscriptions:
-{{< command >}}
-$ awslocal sns list-subscriptions
+```bash
+awslocal sns list-subscriptions
+```
+
+The following output is displayed:
+
+```json
{
"Subscriptions": [
{
@@ -152,12 +177,14 @@ $ awslocal sns list-subscriptions
}
]
}
-{{< /command >}}
+```
Then, use the ARN to unsubscribe:
-{{< command >}}
-$ awslocal sns unsubscribe --subscription-arn "arn:aws:sns:us-east-1:000000000000:localstack-topic:636e2a73-0dda-4e09-9fdf-77f113d0edd8"
-{{< /command >}}
+
+```bash
+awslocal sns unsubscribe \
+ --subscription-arn "arn:aws:sns:us-east-1:000000000000:localstack-topic:636e2a73-0dda-4e09-9fdf-77f113d0edd8"
+```
## Developer endpoints
@@ -193,9 +220,13 @@ You can also call `DELETE /_aws/sns/platform-endpoint-messages` to clear the mes
In this example, we will create a platform endpoint in SNS and publish a message to it.
Run the following commands to create a platform endpoint:
-{{< command >}}
-$ awslocal sns create-platform-application --name app-test --platform APNS --attributes {}
-{{< /command >}}
+```bash
+awslocal sns create-platform-application \
+ --name app-test \
+ --platform APNS \
+ --attributes {}
+```
+
An example response is shown below:
```json
@@ -205,9 +236,14 @@ An example response is shown below:
```
Using the `PlatformApplicationArn` from the previous call:
-{{< command >}}
-$ awslocal sns create-platform-endpoint --platform-application-arn "arn:aws:sns:us-east-1:000000000000:app/APNS/app-test" --token my-fake-token
-{{< /command >}}
+
+```bash
+awslocal sns create-platform-endpoint \
+ --platform-application-arn "arn:aws:sns:us-east-1:000000000000:app/APNS/app-test" \
+ --token my-fake-token
+```
+
+The following output is displayed:
```json
{
@@ -217,9 +253,14 @@ $ awslocal sns create-platform-endpoint --platform-application-arn "arn:aws:sns:
Publish a message to the platform endpoint:
-{{< command >}}
-$ awslocal sns publish --target-arn "arn:aws:sns:us-east-1:000000000000:endpoint/APNS/app-test/c25f353e-856b-4b02-a725-6bde35e6e944" --message '{"APNS_PLATFORM": "{\"aps\": {\"content-available\": 1}}"}' --message-structure json
-{{< /command >}}
+```bash
+awslocal sns publish \
+ --target-arn "arn:aws:sns:us-east-1:000000000000:endpoint/APNS/app-test/c25f353e-856b-4b02-a725-6bde35e6e944" \
+ --message '{"APNS_PLATFORM": "{\"aps\": {\"content-available\": 1}}"}' \
+ --message-structure json
+```
+
+The following output is displayed:
```json
{
@@ -229,9 +270,11 @@ $ awslocal sns publish --target-arn "arn:aws:sns:us-east-1:000000000000:endpoint
Retrieve the messages published to the platform endpoint using [curl](https://curl.se/):
-{{< command >}}
-$ curl "http://localhost:4566/_aws/sns/platform-endpoint-messages" | jq .
-{{< /command >}}
+```bash
+curl "http://localhost:4566/_aws/sns/platform-endpoint-messages" | jq .
+```
+
+The following output is displayed:
```json
{
@@ -253,13 +296,17 @@ $ curl "http://localhost:4566/_aws/sns/platform-endpoint-messages" | jq .
With those same filters, you can reset the saved messages at `DELETE /_aws/sns/platform-endpoint-messages`.
Run the following command to reset the saved messages:
-{{< command >}}
-$ curl -X "DELETE" "http://localhost:4566/_aws/sns/platform-endpoint-messages"
-{{< /command >}}
+```bash
+curl -X "DELETE" "http://localhost:4566/_aws/sns/platform-endpoint-messages"
+```
+
We can now check that the messages have been properly deleted:
-{{< command >}}
-$ curl "http://localhost:4566/_aws/sns/platform-endpoint-messages" | jq .
-{{< /command >}}
+
+```bash
+curl "http://localhost:4566/_aws/sns/platform-endpoint-messages" | jq .
+```
+
+The following output is displayed:
```json
{
@@ -298,9 +345,12 @@ In this example, we will publish a message to a phone number and retrieve it:
Publish a message to a phone number:
-{{< command >}}
-$ awslocal sns publish --phone-number "" --message "Hello World!"
-{{< /command >}}
+```bash
+awslocal sns publish \
+  --phone-number "<PHONE_NUMBER>" \
+ --message "Hello World!"
+```
+
An example response is shown below:
```json
@@ -311,9 +361,11 @@ An example response is shown below:
Retrieve the message published using [curl](https://curl.se/) and [jq](https://jqlang.github.io/jq/):
-{{< command >}}
-$ curl "http://localhost:4566/_aws/sns/sms-messages" | jq .
-{{< /command >}}
+```bash
+curl "http://localhost:4566/_aws/sns/sms-messages" | jq .
+```
+
+The following output is displayed:
```json
{
@@ -339,13 +391,17 @@ You can reset the saved messages at `DELETE /_aws/sns/sms-messages`.
Using the query parameters, you can also selectively reset messages only in one region or from one phone number.
Run the following command to reset the saved messages:
-{{< command >}}
-$ curl -X "DELETE" "http://localhost:4566/_aws/sns/sms-messages"
-{{< /command >}}
+```bash
+curl -X "DELETE" "http://localhost:4566/_aws/sns/sms-messages"
+```
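+
+To reset messages selectively, append the corresponding query parameter; assuming the same `phoneNumber` filter used for retrieval, the following sketch clears only the messages sent to a single number:
+
+```bash
+curl -X "DELETE" "http://localhost:4566/_aws/sns/sms-messages?phoneNumber=%2B1234567890"
+```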
+
We can now check that the messages have been properly deleted:
-{{< command >}}
-$ curl "http://localhost:4566/_aws/sns/sms-messages" | jq .
-{{< /command >}}
+
+```bash
+curl "http://localhost:4566/_aws/sns/sms-messages" | jq .
+```
+
+The following output is displayed:
```json
{
@@ -388,9 +444,11 @@ In this example, we will subscribe to an external SNS integration not confirming
Create an SNS topic, and create a subscription to an external HTTP SNS integration:
-{{< command >}}
+```bash
awslocal sns create-topic --name "test-external-integration"
-{{< /command >}}
+```
+
+The following output is displayed:
```json
{
@@ -399,9 +457,16 @@ awslocal sns create-topic --name "test-external-integration"
```
We now create an HTTP SNS subscription to an external endpoint:
-{{< command >}}
-awslocal sns subscribe --topic-arn "arn:aws:sns:us-east-1:000000000000:test-external-integration" --protocol https --notification-endpoint "https://api.opsgenie.com/v1/json/amazonsns?apiKey=b13fd59a-9" --return-subscription-arn
-{{< /command >}}
+
+```bash
+awslocal sns subscribe \
+ --topic-arn "arn:aws:sns:us-east-1:000000000000:test-external-integration" \
+ --protocol https \
+ --notification-endpoint "https://api.opsgenie.com/v1/json/amazonsns?apiKey=b13fd59a-9" \
+ --return-subscription-arn
+```
+
+The following output is displayed:
```json
{
@@ -411,9 +476,13 @@ awslocal sns subscribe --topic-arn "arn:aws:sns:us-east-1:000000000000:test-exte
Now, we can check the `PendingConfirmation` status of our subscription, which shows that our endpoint has not confirmed the subscription.
You will need to use the `SubscriptionArn` from the response of your subscribe call:
-{{< command >}}
-awslocal sns get-subscription-attributes --subscription-arn "arn:aws:sns:us-east-1:000000000000:test-external-integration:c3ab47f3-b964-461d-84eb-903d8765b0c8"
-{{< /command >}}
+
+```bash
+awslocal sns get-subscription-attributes \
+ --subscription-arn "arn:aws:sns:us-east-1:000000000000:test-external-integration:c3ab47f3-b964-461d-84eb-903d8765b0c8"
+```
+
+The following output is displayed:
```json
{
@@ -431,9 +500,12 @@ awslocal sns get-subscription-attributes --subscription-arn "arn:aws:sns:us-east
```
To manually confirm the subscription, we will fetch its token with our developer endpoint:
-{{< command >}}
+
+```bash
curl "http://localhost:4566/_aws/sns/subscription-tokens/arn:aws:sns:us-east-1:000000000000:test-external-integration:c3ab47f3-b964-461d-84eb-903d8765b0c8" | jq .
-{{< /command >}}
+```
+
+The following output is displayed:
```json
{
@@ -443,9 +515,14 @@ curl "http://localhost:4566/_aws/sns/subscription-tokens/arn:aws:sns:us-east-1:0
```
We can now use this token to manually confirm the subscription:
-{{< command >}}
-awslocal sns confirm-subscription --topic-arn "arn:aws:sns:us-east-1:000000000000:test-external-integration" --token 75732d656173742d312f3b875fb03b875fb03b875fb03b875fb03b875fb03b87
-{{< /command >}}
+
+```bash
+awslocal sns confirm-subscription \
+ --topic-arn "arn:aws:sns:us-east-1:000000000000:test-external-integration" \
+ --token 75732d656173742d312f3b875fb03b875fb03b875fb03b875fb03b875fb03b87
+```
+
+The following output is displayed:
```json
{
@@ -454,9 +531,13 @@ awslocal sns confirm-subscription --topic-arn "arn:aws:sns:us-east-1:00000000000
```
We can now finally verify the subscription has been confirmed:
-{{< command >}}
-awslocal sns get-subscription-attributes --subscription-arn "arn:aws:sns:us-east-1:000000000000:test-external-integration:c3ab47f3-b964-461d-84eb-903d8765b0c8"
-{{< /command >}}
+
+```bash
+awslocal sns get-subscription-attributes \
+ --subscription-arn "arn:aws:sns:us-east-1:000000000000:test-external-integration:c3ab47f3-b964-461d-84eb-903d8765b0c8"
+```
+
+The following output is displayed:
```json
{
@@ -481,7 +562,7 @@ SNS will now publish messages to your HTTP endpoint, even if it did not confirm
The LocalStack Web Application provides a Resource Browser for managing SNS topics.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **SNS** under the **App Integration** section.
-
+
The Resource Browser allows you to perform the following actions:
diff --git a/src/content/docs/aws/services/sqs.md b/src/content/docs/aws/services/sqs.mdx
similarity index 86%
rename from src/content/docs/aws/services/sqs.md
rename to src/content/docs/aws/services/sqs.mdx
index 0b46990b..a90b4929 100644
--- a/src/content/docs/aws/services/sqs.md
+++ b/src/content/docs/aws/services/sqs.mdx
@@ -1,8 +1,6 @@
---
title: "Simple Queue Service (SQS)"
description: Get started with Simple Queue Service (SQS) on LocalStack
-aliases:
-- /aws/sqs/
persistence: supported
tags: ["Free"]
---
@@ -14,7 +12,7 @@ It allows you to decouple different components of your applications by enabling
SQS allows you to reliably send, store, and receive messages with support for standard and FIFO queues.
LocalStack allows you to use the SQS APIs in your local environment to integrate and decouple distributed systems via hosted queues.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_sqs" >}}), which provides information on the extent of SQS's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of SQS's integration with LocalStack.
## Getting started
@@ -28,16 +26,16 @@ We will demonstrate how to create an SQS queue, retrieve queue attributes and UR
To create an SQS queue, use the [`CreateQueue`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_CreateQueue.html) API.
Run the following command to create a queue named `localstack-queue`:
-{{< command >}}
-$ awslocal sqs create-queue --queue-name localstack-queue
-{{< / command >}}
+```bash
+awslocal sqs create-queue --queue-name localstack-queue
+```
You can list all queues in your account using the [`ListQueues`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_ListQueues.html) API.
Run the following command to list all queues in your account:
-{{< command >}}
-$ awslocal sqs list-queues
-{{< / command >}}
+```bash
+awslocal sqs list-queues
+```
You will see the following output:
@@ -54,9 +52,11 @@ You need to pass the `queue-url` and `attribute-names` parameters.
Run the following command to retrieve the queue attributes:
-{{< command >}}
-$ awslocal sqs get-queue-attributes --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue --attribute-names All
-{{< / command >}}
+```bash
+awslocal sqs get-queue-attributes \
+ --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue \
+ --attribute-names All
+```
### Sending and receiving messages from the queue
@@ -65,9 +65,11 @@ To send a message to a SQS queue, you can use the [`SendMessage`](https://docs.a
Run the following command to send a message to the queue:
-{{< command >}}
-$ awslocal sqs send-message --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue --message-body "Hello World"
-{{< / command >}}
+```bash
+awslocal sqs send-message \
+ --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue \
+ --message-body "Hello World"
+```
It will return the MD5 hash of the Message Body and a Message ID.
You will see output similar to the following:
@@ -82,9 +84,10 @@ You will see output similar to the following:
You can receive messages from the queue using the [`ReceiveMessage`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_ReceiveMessage.html) API.
Run the following command to receive messages from the queue:
-{{< command >}}
-$ awslocal sqs receive-message --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue
-{{< / command >}}
+```bash
+awslocal sqs receive-message \
+ --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue
+```
You will see the Message ID, MD5 hash of the Message Body, Receipt Handle, and the Message Body in the output.
@@ -95,18 +98,21 @@ You need to pass the `queue-url` and `receipt-handle` parameters.
Run the following command to delete a message from the queue:
-{{< command >}}
-$ awslocal sqs delete-message --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue --receipt-handle
-{{< / command >}}
+```bash
+awslocal sqs delete-message \
+ --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue \
+  --receipt-handle <RECEIPT_HANDLE>
+```
Replace `<RECEIPT_HANDLE>` with the receipt handle you received in the previous step.
If you have sent multiple messages to the queue, you can purge the queue using the [`PurgeQueue`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_PurgeQueue.html) API.
Run the following command to purge the queue:
-{{< command >}}
-$ awslocal sqs purge-queue --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue
-{{< / command >}}
+```bash
+awslocal sqs purge-queue \
+ --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue
+```
## Dead-letter queue testing
@@ -115,10 +121,15 @@ Here's an end-to-end example of how to use message move tasks to test DLQ redriv
First, create three queues.
One will serve as the original input queue, one as the DLQ, and the third as the target for the DLQ redrive.
-{{< command >}}
-$ awslocal sqs create-queue --queue-name input-queue
-$ awslocal sqs create-queue --queue-name dead-letter-queue
-$ awslocal sqs create-queue --queue-name recovery-queue
+```bash
+awslocal sqs create-queue --queue-name input-queue
+awslocal sqs create-queue --queue-name dead-letter-queue
+awslocal sqs create-queue --queue-name recovery-queue
+```
+
+The following output is displayed:
+
+```json
{
"QueueUrl": "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/input-queue"
}
@@ -128,27 +139,36 @@ $ awslocal sqs create-queue --queue-name recovery-queue
{
"QueueUrl": "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/recovery-queue"
}
-{{< /command >}}
+```
Configure `dead-letter-queue` to be a DLQ for `input-queue`:
-{{< command >}}
-$ awslocal sqs set-queue-attributes \
---queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/input-queue \
---attributes '{
+
+```bash
+awslocal sqs set-queue-attributes \
+ --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/input-queue \
+ --attributes '{
"RedrivePolicy": "{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:000000000000:dead-letter-queue\",\"maxReceiveCount\":\"1\"}"
}'
-{{< /command >}}
+```
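+
+You can double-check that the redrive policy was applied by reading the attribute back, for example:
+
+```bash
+awslocal sqs get-queue-attributes \
+  --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/input-queue \
+  --attribute-names RedrivePolicy
+```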
Send a message to the input queue:
-{{< command >}}
-$ awslocal sqs send-message --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/input-queue --message-body '{"hello": "world"}'
-{{< /command >}}
+
+```bash
+awslocal sqs send-message \
+ --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/input-queue \
+ --message-body '{"hello": "world"}'
+```
Receive the message twice to provoke a move into the dead-letter queue:
-{{< command >}}
-$ awslocal sqs receive-message --visibility-timeout 0 --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/input-queue
-$ awslocal sqs receive-message --visibility-timeout 0 --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/input-queue
-{{< /command >}}
+
+```bash
+awslocal sqs receive-message \
+ --visibility-timeout 0 \
+ --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/input-queue
+awslocal sqs receive-message \
+ --visibility-timeout 0 \
+ --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/input-queue
+```
In the LocalStack logs, you should see something like the following line, indicating that the message was moved to the DLQ:
@@ -157,15 +177,23 @@ In the localstack logs you should see something like the following line, indicat
```
Now, start a message move task to asynchronously move the messages from the DLQ into the recovery queue:
-{{< command >}}
-$ awslocal sqs start-message-move-task \
- --source-arn arn:aws:sqs:us-east-1:000000000000:dead-letter-queue \
- --destination-arn arn:aws:sqs:us-east-1:000000000000:recovery-queue
-{{< /command >}}
+
+```bash
+awslocal sqs start-message-move-task \
+ --source-arn arn:aws:sqs:us-east-1:000000000000:dead-letter-queue \
+ --destination-arn arn:aws:sqs:us-east-1:000000000000:recovery-queue
+```
Listing the message move tasks should yield something like the following:
-{{< command >}}
-$ awslocal sqs list-message-move-tasks --source-arn arn:aws:sqs:us-east-1:000000000000:dead-letter-queue
+
+```bash
+awslocal sqs list-message-move-tasks \
+ --source-arn arn:aws:sqs:us-east-1:000000000000:dead-letter-queue
+```
+
+The following output is displayed:
+
+```json
{
"Results": [
{
@@ -178,10 +206,11 @@ $ awslocal sqs list-message-move-tasks --source-arn arn:aws:sqs:us-east-1:000000
}
]
}
-{{< /command >}}
+```
Receiving messages from the recovery queue should now show us the original message:
-{{< command >}}
-$ awslocal sqs receive-message --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/recovery-queue
+
+```bash
+awslocal sqs receive-message \
+  --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/recovery-queue
+```
+
+The following output is displayed:
+
+```json
{
"Messages": [
@@ -193,7 +222,7 @@ $ awslocal sqs receive-message --queue-url http://sqs.us-east-1.localhost.locals
}
]
}
-{{< /command >}}
+```
## SQS Query API
@@ -204,9 +233,9 @@ With LocalStack, you can conveniently test SQS Query API calls without the need
For instance, you can use a basic [curl](https://curl.se/) command to send a `SendMessage` command along with a MessageBody attribute:
-{{< command >}}
-$ curl "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue?Action=SendMessage&MessageBody=hello%2Fworld"
-{{< / command >}}
+```bash
+curl "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue?Action=SendMessage&MessageBody=hello%2Fworld"
+```
You will see the following output:
@@ -229,9 +258,9 @@ Adding the `Accept: application/json` header will make the server return JSON:
To receive JSON responses from the server, include the `Accept: application/json` header in your request.
Here's an example using the [curl](https://curl.se/) command:
-{{< command >}}
-$ curl -H "Accept: application/json" "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue?Action=SendMessage&MessageBody=hello%2Fworld"
-{{< / command >}}
+```bash
+curl -H "Accept: application/json" "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue?Action=SendMessage&MessageBody=hello%2Fworld"
+```
The response will be in JSON format:
@@ -288,11 +317,11 @@ You can enable this behavior in LocalStack by setting the `SQS_ENABLE_MESSAGE_RE
In AWS, valid values for message retention range from 60 (1 minute) to 1,209,600 (14 days).
In LocalStack, we do not put constraints on the value, which can be helpful for test scenarios.
-{{< callout >}}
-Note that, if you enable this option, [persistence]({{< ref "user-guide/state-management/persistence" >}}) or [cloud pods]({{}}) for SQS may not work as expected.
+:::note
+If you enable this option, [persistence](/aws/capabilities/state-management/persistence) or [cloud pods](/aws/capabilities/state-management/cloud-pods) for SQS may not work as expected.
The reason is that LocalStack does not adjust timestamps when restoring a state, so time appears to pass between LocalStack runs.
Consequently, when you restart LocalStack after a period that is longer than the message retention period, LocalStack will remove all those messages when SQS starts.
-{{}}
+:::
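+
+For example, you might enable the flag at startup and then give a queue a deliberately short retention period for a test (illustrative values):
+
+```bash
+SQS_ENABLE_MESSAGE_RETENTION_PERIOD=1 localstack start -d
+awslocal sqs set-queue-attributes \
+  --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue \
+  --attributes MessageRetentionPeriod=5
+```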
### Disable CloudWatch Metrics Reporting
@@ -337,7 +366,7 @@ Our Lambda implementation automatically resolves these URLs to the LocalStack co
When your code runs within different containers, such as ECS tasks or your own custom containers, you should set up your Docker network accordingly.
You can follow these steps:
-1. Override the `LOCALSTACK_HOST` variable as outlined in our [network troubleshooting guide]({{< ref "endpoint-url" >}}).
+1. Override the `LOCALSTACK_HOST` variable as outlined in our [network troubleshooting guide]().
2. Ensure that your containers can resolve `LOCALSTACK_HOST` to the LocalStack container within the Docker network.
3. We recommend employing `SQS_ENDPOINT_STRATEGY=path`, which generates queue URLs in the format `http://<hostname>/queue/...`, as shown in the sketch below.
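+
+A minimal illustrative invocation, assuming you start LocalStack via its CLI, could look like this:
+
+```bash
+SQS_ENDPOINT_STRATEGY=path localstack start -d
+```
+
+Queue URLs should then take a form like `http://localhost:4566/queue/us-east-1/000000000000/my-queue`.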
@@ -359,24 +388,29 @@ The endpoint ignores any additional parameters from the `ReceiveMessage` operati
You can call the `/_aws/sqs/messages` endpoint in two different ways:
1. Using the query argument `QueueUrl`, like this:
- {{< command >}}
- $ http://localhost.localstack.cloud:4566/_aws/sqs/messages?QueueUrl=http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue
- {{< / command >}}
+ ```bash
+ curl "http://localhost.localstack.cloud:4566/_aws/sqs/messages?QueueUrl=http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue"
+ ```
2. Utilizing the path-based endpoint, as shown in this example:
- {{< command >}}
- $ http://localhost.localstack.cloud:4566/_aws/sqs/messages/us-east-1/000000000000/my-queue
- {{< / command >}}
+ ```bash
+ curl "http://localhost.localstack.cloud:4566/_aws/sqs/messages/us-east-1/000000000000/my-queue"
+ ```
#### XML response
You can directly call the endpoint to obtain the raw AWS XML response.
-{{< tabpane >}}
-{{< tab header="curl" lang="bash" >}}
+import { Tabs, TabItem } from '@astrojs/starlight/components';
+
+<Tabs>
+<TabItem label="curl">
+```bash
curl "http://localhost.localstack.cloud:4566/_aws/sqs/messages?QueueUrl=http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue"
-{{< /tab >}}
-{{< tab header="Python Requests" lang="python" >}}
+```
+</TabItem>
+<TabItem label="Python Requests">
+```python
import requests
response = requests.get(
@@ -384,8 +418,9 @@ response = requests.get(
params={"QueueUrl": "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue"},
)
print(response.text) # outputs the response XML
-{{< /tab >}}
-{{< / tabpane >}}
+```
+</TabItem>
+</Tabs>
An example response is shown below:
@@ -448,21 +483,25 @@ An example response is shown below:
You can include the `Accept: application/json` header in your request if you prefer a JSON response.
-{{< tabpane >}}
-{{< tab header="curl" lang="bash" >}}
+<Tabs>
+<TabItem label="curl">
+```bash
curl -H "Accept: application/json" \
"http://localhost.localstack.cloud:4566/_aws/sqs/messages?QueueUrl=http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue"
-{{< /tab >}}
-{{< tab header="Python Requests" lang="python" >}}
+```
+</TabItem>
+<TabItem label="Python Requests">
+```python
import requests
response = requests.get(
url="http://localhost.localstack.cloud:4566/_aws/sqs/messages",
- params={"QueueUrl": "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue""},
+ params={"QueueUrl": "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue"},
)
print(response.text) # outputs the response JSON
-{{< /tab >}}
-{{< / tabpane >}}
+```
+</TabItem>
+</Tabs>
An example response is shown below:
@@ -532,18 +571,22 @@ An example response is shown below:
Since the `/_aws/sqs/messages` endpoint is compatible with the SQS `ReceiveMessage` operation, you can use the endpoint as the endpoint URL parameter in your AWS client call.
-{{< tabpane >}}
-{{< tab header="aws-cli" lang="bash" >}}
+<Tabs>
+<TabItem label="aws-cli">
+```bash
aws --endpoint-url=http://localhost.localstack.cloud:4566/_aws/sqs/messages sqs receive-message \
--queue-url=http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue
-{{< /tab >}}
-{{< tab header="Boto3" lang="python" >}}
+```
+</TabItem>
+<TabItem label="Boto3">
+```python
import boto3
sqs = boto3.client("sqs", endpoint_url="http://localhost.localstack.cloud:4566/_aws/sqs/messages")
response = sqs.receive_message(QueueUrl="http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue")
print(response)
-{{< /tab >}}
-{{< / tabpane >}}
+```
+</TabItem>
+</Tabs>
An example response is shown below:
@@ -582,22 +625,25 @@ An example response is shown below:
The developer endpoint also supports showing invisible and delayed messages via the query arguments `ShowInvisible` and `ShowDelayed`.
-{{< tabpane >}}
-{{< tab header="curl" lang="bash" >}}
+<Tabs>
+<TabItem label="curl">
+```bash
curl -H "Accept: application/json" \
"http://localhost.localstack.cloud:4566/_aws/sqs/messages?ShowInvisible=true&ShowDelayed=true&QueueUrl=http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue
-{{< /tab >}}
-{{< tab header="Python Requests" lang="python" >}}
+```
+</TabItem>
+<TabItem label="Python Requests">
+```python
import requests
-
+queue_url = "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue"  # queue under test (placeholder URL)
response = requests.get(
"http://localhost.localstack.cloud:4566/_aws/sqs/messages",
params={"QueueUrl": queue_url, "ShowInvisible": True, "ShowDelayed": True},
headers={"Accept": "application/json"},
)
print(response.text)
-{{< /tab >}}
-{{< / tabpane >}}
+```
+</TabItem>
+</Tabs>
This will also include messages that currently have an active visibility timeout or were delayed and are not actually in the queue yet.
Here's an example:
@@ -627,7 +673,7 @@ Here's an example:
The LocalStack Web Application provides a Resource Browser for managing SQS queues.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **SQS** under the **App Integration** section.
-
+
The Resource Browser allows you to perform the following actions:
diff --git a/src/content/docs/aws/services/ssm.md b/src/content/docs/aws/services/ssm.md
index 60560f38..0e513fc0 100644
--- a/src/content/docs/aws/services/ssm.md
+++ b/src/content/docs/aws/services/ssm.md
@@ -1,6 +1,5 @@
---
title: "Systems Manager (SSM)"
-linkTitle: "Systems Manager (SSM)"
description: Get started with Systems Manager (SSM) on LocalStack
tags: ["Free"]
persistence: supported
@@ -12,7 +11,7 @@ Systems Manager (SSM) is a management service provided by Amazon Web Services th
SSM simplifies tasks related to system and application management, patching, configuration, and automation, allowing you to maintain the health and compliance of your environment.
LocalStack allows you to use the SSM APIs in your local environment to run operational tasks on the Dockerized instances.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_ssm" >}}), which provides information on the extent of SSM's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of SSM's integration with LocalStack.
## Getting started
@@ -27,20 +26,20 @@ To get started, pull the `ubuntu:focal` image from Docker Hub and tag it as `loc
LocalStack uses a naming scheme to recognise and manage the containers and images associated with it.
The containers are named `localstack-ec2.<instance-id>`, while images are tagged `localstack-ec2/<ami-name>:<ami-id>`.
-{{< command >}}
-$ docker pull ubuntu:focal
-$ docker tag ubuntu:focal localstack-ec2/ubuntu-focal-docker-ami:ami-00a001
-{{< / command >}}
+```bash
+docker pull ubuntu:focal
+docker tag ubuntu:focal localstack-ec2/ubuntu-focal-docker-ami:ami-00a001
+```
LocalStack's Docker backend treats Docker images with the above naming scheme as AMIs.
The AMI ID is the last part of the image tag, `ami-00a001` in this case.
You can run an EC2 instance using the [`RunInstances`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_RunInstances.html) API.
Execute the following command to create an EC2 instance using the `ami-00a001` AMI.
-{{< command >}}
-$ awslocal ec2 run-instances \
+```bash
+awslocal ec2 run-instances \
--image-id ami-00a001 --count 1
-{{< / command >}}
+```
The following output would be retrieved:
@@ -71,12 +70,12 @@ You can copy the `InstanceId` value and use it in the following commands.
You can use the [`SendCommand`](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_SendCommand.html) API to send a command to the EC2 instance.
The following command runs `cat lsb-release` in the `/etc` directory on the EC2 instance.
-{{< command >}}
-$ awslocal ssm send-command --document-name "AWS-RunShellScript" \
+```bash
+awslocal ssm send-command --document-name "AWS-RunShellScript" \
--document-version "1" \
--instance-ids i-abf6920789a06dd84 \
--parameters "commands='cat lsb-release',workingDirectory=/etc"
-{{< / command >}}
+```
The following output would be retrieved:
@@ -101,11 +100,11 @@ You can copy the `CommandId` value and use it in the following commands.
You can use the [`GetCommandInvocation`](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetCommandInvocation.html) API to retrieve the command output.
The following command retrieves the output of the command sent in the previous step.
-{{< command >}}
-$ awslocal ssm get-command-invocation \
+```bash
+awslocal ssm get-command-invocation \
--command-id 23547a9b-6993-4967-9446-f96b9b5dac70 \
--instance-id i-abf6920789a06dd84
-{{< / command >}}
+```
Change the `CommandId` and `InstanceId` values to the ones you received in the previous step.
The following output would be retrieved:
@@ -127,7 +126,7 @@ The following output would be retrieved:
The LocalStack Web Application provides a Resource Browser for managing SSM System Parameters.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resource Browser** section, and then clicking on **Simple Systems Manager (SSM)** under the **Management/Governance** section.
-
+
The Resource Browser allows you to perform the following actions:
diff --git a/src/content/docs/aws/services/stepfunctions.md b/src/content/docs/aws/services/stepfunctions.mdx
similarity index 94%
rename from src/content/docs/aws/services/stepfunctions.md
rename to src/content/docs/aws/services/stepfunctions.mdx
index d5ae8d21..b8e2f136 100644
--- a/src/content/docs/aws/services/stepfunctions.md
+++ b/src/content/docs/aws/services/stepfunctions.mdx
@@ -1,9 +1,7 @@
---
title: "Step Functions"
-linkTitle: "Step Functions"
tags: ["Free"]
-description: >
- Get started with Step Functions on LocalStack
+description: Get started with Step Functions on LocalStack
---
## Introduction
@@ -13,7 +11,7 @@ It provides a JSON-based structured language called Amazon States Language (ASL)
This makes it easier to build and maintain complex, distributed applications.
LocalStack allows you to use the Step Functions APIs in your local environment to create, execute, update, and delete state machines locally.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_stepfunctions" >}}), which provides information on the extent of Step Function's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of Step Function's integration with LocalStack.
## Getting started
@@ -28,8 +26,8 @@ You can create a state machine using the [`CreateStateMachine`](https://docs.aws
The API requires the name of the state machine, the state machine definition, and the role ARN that the state machine will assume to call AWS services.
Run the following command to create a state machine:
-{{< command >}}
-$ awslocal stepfunctions create-state-machine \
+```bash
+awslocal stepfunctions create-state-machine \
--name "CreateAndListBuckets" \
--definition '{
"Comment": "Create bucket and list buckets",
@@ -51,7 +49,7 @@ $ awslocal stepfunctions create-state-machine \
}
}' \
--role-arn "arn:aws:iam::000000000000:role/stepfunctions-role"
-{{< /command >}}
+```
The output of the above command is the ARN of the state machine:
@@ -68,10 +66,10 @@ You can execute the state machine using the [`StartExecution`](https://docs.aws.
The API requires the state machine's ARN and the state machine's input.
Run the following command to execute the state machine:
-{{< command >}}
-$ awslocal stepfunctions start-execution \
+```bash
+awslocal stepfunctions start-execution \
--state-machine-arn "arn:aws:states:us-east-1:000000000000:stateMachine:CreateAndListBuckets"
-{{< /command >}}
+```
The output of the above command is the execution ARN:
@@ -87,10 +85,10 @@ The output of the above command is the execution ARN:
To check the status of the execution, you can use the [`DescribeExecution`](https://docs.aws.amazon.com/step-functions/latest/apireference/API_DescribeExecution.html) API.
Run the following command to describe the execution:
-{{< command >}}
-$ awslocal stepfunctions describe-execution \
+```bash
+awslocal stepfunctions describe-execution \
--execution-arn "arn:aws:states:us-east-1:000000000000:execution:CreateAndListBuckets:bf7d2138-e96f-42d1-b1f9-41f0c1c7bc3e"
-{{< /command >}}
+```
Replace the `execution-arn` with the ARN of the execution you want to describe.
@@ -161,10 +159,10 @@ LocalStack can also serve as a drop-in replacement for [AWS Step Functions Local
It supports test cases with mocked Task states and maintains compatibility with existing Step Functions Local configurations.
This functionality is extended in LocalStack by providing access to the latest Step Functions features such as [JSONata and Variables](https://blog.localstack.cloud/aws-step-functions-made-easy/), as well as the ability to mix mocked service integrations with calls to services emulated by LocalStack.
-{{< callout >}}
+:::note
LocalStack does not validate response formats.
Ensure the payload structure in the mocked responses matches what the real service expects.
-{{< /callout >}}
+:::
### Identify a State Machine for Mocked Integrations
@@ -287,9 +285,9 @@ In the example above:
- `Return`: Simulates a successful response by returning a predefined payload.
- `Throw`: Simulates a failure by returning an `Error` and an optional `Cause`.
-{{< callout >}}
+:::note
Each entry must have **either** `Return` or `Throw`, but cannot have both.
-{{< /callout >}}
+:::
Here is a complete example of the `MockedResponses` section:
@@ -390,12 +388,17 @@ Set the `SFN_MOCK_CONFIG` environment variable to the path of your mock configur
If you're running LocalStack in Docker, mount the file and pass the variable as shown below:
-{{< tabpane >}}
-{{< tab header="LocalStack CLI" lang="shell" >}}
+import { Tabs, TabItem } from '@astrojs/starlight/components';
+
+<Tabs>
+<TabItem label="LocalStack CLI">
+```bash
LOCALSTACK_SFN_MOCK_CONFIG=/tmp/MockConfigFile.json \
localstack start --volume /path/to/MockConfigFile.json:/tmp/MockConfigFile.json
-{{< /tab >}}
-{{< tab header="Docker Compose" lang="yaml" >}}
+```
+</TabItem>
+<TabItem label="Docker Compose">
+```yaml
services:
localstack:
container_name: "${LOCALSTACK_DOCKER_NAME:-localstack-main}"
@@ -411,8 +414,9 @@ services:
- "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
- "./MockConfigFile.json:/tmp/MockConfigFile.json"
-{{< /tab >}}
-{{< /tabpane >}}
+```
+</TabItem>
+</Tabs>
### Run Test Cases with Mocked Integrations
@@ -420,12 +424,12 @@ Create the state machine to match the name defined in the mock configuration fil
In this example, create the `LambdaSQSIntegration` state machine using:
-{{< command >}}
-$ awslocal stepfunctions create-state-machine \
+```bash
+awslocal stepfunctions create-state-machine \
--definition file://LambdaSQSIntegration.json \
--name "LambdaSQSIntegration" \
--role-arn "arn:aws:iam::000000000000:role/service-role/testrole"
-{{< /command >}}
+```
After the state machine is created and correctly named, you can run test cases defined in the mock configuration file using the [`StartExecution`](https://docs.aws.amazon.com/step-functions/latest/apireference/API_StartExecution.html) API.
@@ -435,22 +439,22 @@ This tells LocalStack to apply the corresponding mocked responses from the confi
For example, to run the `BaseCase` test case:
-{{< command >}}
-$ awslocal stepfunctions start-execution \
+```bash
+awslocal stepfunctions start-execution \
--state-machine arn:aws:states:us-east-1:000000000000:stateMachine:LambdaSQSIntegration#BaseCase \
--input '{"name": "John", "surname": "smith"}' \
--name "MockExecutionBaseCase"
-{{< /command >}}
+```
During execution, any state mapped in the mock config will use the predefined response.
States without mock entries invoke the actual emulated service as usual.
You can inspect the execution using the [`DescribeExecution`](https://docs.aws.amazon.com/step-functions/latest/apireference/API_DescribeExecution.html) API:
-{{< command >}}
-$ awslocal stepfunctions describe-execution \
+```bash
+awslocal stepfunctions describe-execution \
--execution-arn "arn:aws:states:us-east-1:000000000000:execution:LambdaSQSIntegration:MockExecutionBaseCase"
-{{< /command >}}
+```
The sample output shows the execution details, including the state machine ARN, execution ARN, status, start and stop dates, input, and output:
@@ -475,10 +479,10 @@ The sample output shows the execution details, including the state machine ARN,
You can also use the [`GetExecutionHistory`](https://docs.aws.amazon.com/step-functions/latest/apireference/API_GetExecutionHistory.html) API to retrieve the execution history, including the events and their details.
-{{< command >}}
-$ awslocal stepfunctions get-execution-history \
+```bash
+awslocal stepfunctions get-execution-history \
--execution-arn "arn:aws:states:us-east-1:000000000000:execution:LambdaSQSIntegration:MockExecutionBaseCase"
-{{< /command >}}
+```
This will return the full execution history, including entries that indicate how mocked responses were applied to Lambda and SQS states.
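+
+For a quick look at how individual states behaved, you can, for example, filter the history with `jq` (assuming it is installed):
+
+```bash
+awslocal stepfunctions get-execution-history \
+  --execution-arn "arn:aws:states:us-east-1:000000000000:execution:LambdaSQSIntegration:MockExecutionBaseCase" \
+  | jq '.events[] | select(.type | endswith("Succeeded"))'
+```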
@@ -522,9 +526,7 @@ The LocalStack Web Application includes a **Resource Browser** for managing Step
To access it, open the LocalStack Web UI in your browser, navigate to the **Resource Browser** section, and click **Step Functions** under **App Integration**.
-
-
-
+
The Resource Browser allows you to perform the following actions:
diff --git a/src/content/docs/aws/services/sts.md b/src/content/docs/aws/services/sts.md
index c8e0dd52..4d30b154 100644
--- a/src/content/docs/aws/services/sts.md
+++ b/src/content/docs/aws/services/sts.md
@@ -1,6 +1,5 @@
---
title: "Security Token Service (STS)"
-linkTitle: "Security Token Service (STS)"
description: Get started with Security Token Service on LocalStack
persistence: supported
tags: ["Free"]
@@ -13,7 +12,7 @@ STS implements fine-grained access control and reduce the exposure of your long-
The temporary credentials, known as security tokens, can be used to access AWS services and resources based on the permissions specified in the associated policies.
LocalStack allows you to use the STS APIs in your local environment to request security tokens, manage permissions, integrate with identity providers, and more.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_sts" >}}), which provides information on the extent of STS's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of STS's integration with LocalStack.
## Getting started
@@ -28,18 +27,18 @@ You can create an IAM User and Role using the [`CreateUser`](https://docs.aws.am
The IAM User will be used to assume the IAM Role.
Run the following command to create an IAM User, named `localstack-user`:
-{{< command >}}
-$ awslocal iam create-user \
+```bash
+awslocal iam create-user \
--user-name localstack-user
-{{< /command >}}
+```
You can generate long-term access keys for the IAM user using the [`CreateAccessKey`](https://docs.aws.amazon.com/IAM/latest/APIReference/API_CreateAccessKey.html) API.
Run the following command to create an access key for the IAM user:
-{{< command >}}
-$ awslocal iam create-access-key \
+```bash
+awslocal iam create-access-key \
--user-name localstack-user
-{{< /command >}}
+```
The following output would be retrieved:
@@ -58,9 +57,9 @@ The following output would be retrieved:
Using STS, you can also fetch temporary credentials for this user using the [`GetSessionToken`](https://docs.aws.amazon.com/STS/latest/APIReference/API_GetSessionToken.html) API.
Run the following command using your long-term credentials to get your temporary credentials:
-{{< command >}}
-$ awslocal sts get-session-token
-{{< /command >}}
+```bash
+awslocal sts get-session-token
+```
The following output would be retrieved:
@@ -80,11 +79,11 @@ The following output would be retrieved:
You can now create an IAM Role, named `localstack-role`, using the [`CreateRole`](https://docs.aws.amazon.com/IAM/latest/APIReference/API_CreateRole.html) API.
Run the following command to create the IAM Role:
-{{< command >}}
-$ awslocal iam create-role \
+```bash
+awslocal iam create-role \
--role-name localstack-role \
--assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":"arn:aws:iam::000000000000:root"},"Action":"sts:AssumeRole"}]}'
-{{< /command >}}
+```
The following output would be retrieved:
@@ -115,22 +114,22 @@ The following output would be retrieved:
You can attach the policy to the IAM role using the [`AttachRolePolicy`](https://docs.aws.amazon.com/IAM/latest/APIReference/API_AttachRolePolicy.html) API.
Run the following command to attach the policy to the IAM role:
-{{< command >}}
-$ awslocal iam attach-role-policy \
+```bash
+awslocal iam attach-role-policy \
--role-name localstack-role \
--policy-arn arn:aws:iam::aws:policy/AdministratorAccess
-{{< /command >}}
+```
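+
+To confirm the attachment, you can, for instance, list the policies attached to the role:
+
+```bash
+awslocal iam list-attached-role-policies \
+  --role-name localstack-role
+```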
### Assume an IAM Role
You can assume an IAM Role using the [`AssumeRole`](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html) API.
Run the following command to assume the IAM Role:
-{{< command >}}
-$ awslocal sts assume-role \
+```bash
+awslocal sts assume-role \
--role-arn arn:aws:iam::000000000000:role/localstack-role \
--role-session-name localstack-session
-{{< /command >}}
+```
The following output would be retrieved:
@@ -157,9 +156,9 @@ You can use the temporary credentials in your applications for temporary access.
You can get the caller identity to identify the principal your current credentials are valid for using the [`GetCallerIdentity`](https://docs.aws.amazon.com/STS/latest/APIReference/API_GetCallerIdentity.html) API.
Run the following command to get the caller identity for the credentials set in your environment:
-{{< command >}}
-$ awslocal sts get-caller-identity
-{{< /command >}}
+```bash
+awslocal sts get-caller-identity
+```
The following output would be retrieved:
diff --git a/src/content/docs/aws/services/support.md b/src/content/docs/aws/services/support.md
index 63e61890..bf82d4d6 100644
--- a/src/content/docs/aws/services/support.md
+++ b/src/content/docs/aws/services/support.md
@@ -1,6 +1,5 @@
---
title: "Support"
-linkTitle: "Support"
description: Get started with Support on LocalStack
persistence: supported
tags: ["Free"]
@@ -14,12 +13,12 @@ You can further automate your support workflow using various AWS services, such
LocalStack allows you to use the Support APIs in your local environment to create and manage new cases, while testing your configurations locally.
LocalStack provides a mock implementation via a mock Support Center provided by [Moto](https://docs.getmoto.org/en/latest/docs/services/support.html), and does not create real cases in AWS.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_support" >}}), which provides information on the extent of Support API's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of Support API's integration with LocalStack.
-{{< callout >}}
-For technical support with LocalStack, you can reach out through our [support channels]({{< ref "help-and-support" >}}).
+:::note
+For technical support with LocalStack, you can reach out through our [support channels](/aws/getting-started/help-support).
It's important to note that LocalStack doesn't offer a programmatic interface to create support cases, and this documentation is only intended to demonstrate how you can use and mock the AWS Support APIs in your local environment.
-{{< /callout >}}
+:::
## Getting started
@@ -33,13 +32,13 @@ We will demonstrate how you can create a case in the mock Support Center using t
To create a support case, you can use the [`CreateCase`](https://docs.aws.amazon.com/goto/WebAPI/support-2013-04-15/CreateCase) API.
The following example creates a case with the subject "Test case" and the description "This is a test case" in the category "General guidance".
-{{< command >}}
-$ awslocal support create-case \
+```bash
+awslocal support create-case \
--subject "Test case" \
--service-code "general-guidance" \
--category-code "general-guidance" \
--communication-body "This is a test case"
-{{< / command >}}
+```
The following output would be retrieved:
@@ -54,9 +53,9 @@ The following output would be retrieved:
To list all support cases, you can use the [`DescribeCases`](https://docs.aws.amazon.com/awssupport/latest/APIReference/API_DescribeCases.html) API.
The following example lists all support cases.
-{{< command >}}
-$ awslocal support describe-cases
-{{< / command >}}
+```bash
+awslocal support describe-cases
+```
The following output would be retrieved:
@@ -89,10 +88,10 @@ The following output would be retrieved:
To resolve a support case, you can use the [`ResolveCase`](https://docs.aws.amazon.com/goto/WebAPI/support-2013-04-15/ResolveCase) API.
The following example resolves the case created in the previous step.
-{{< command >}}
-$ awslocal support resolve-case \
+```bash
+awslocal support resolve-case \
--case-id "case-12345678910-2020-kEa16f90bJE766J4"
-{{< / command >}}
+```
Replace the case ID with the ID of the case you want to resolve.
The following output would be retrieved:
diff --git a/src/content/docs/aws/services/swf.md b/src/content/docs/aws/services/swf.md
index 04038589..878c9f28 100644
--- a/src/content/docs/aws/services/swf.md
+++ b/src/content/docs/aws/services/swf.md
@@ -1,8 +1,6 @@
---
title: "Simple Workflow Service (SWF)"
-linkTitle: "Simple Workflow Service (SWF)"
-description: >
- Get started with Simple Workflow Service (SWF) on LocalStack
+description: Get started with Simple Workflow Service (SWF) on LocalStack
tags: ["Free"]
---
@@ -13,7 +11,7 @@ SWF allows you to define workflows in a way that's separate from the actual appl
SWF also provides a programming framework to design, coordinate, and execute workflows that involve multiple tasks, steps, and decision points.
LocalStack allows you to use the SWF APIs in your local environment to monitor and manage workflow design, task coordination, activity implementation, and error handling.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_swf" >}}), which provides information on the extent of SWF's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of SWF's integration with LocalStack.
## Getting started
@@ -27,19 +25,19 @@ We will demonstrate how to register an SWF domain and workflow using the AWS CLI
You can register an SWF domain using the [`RegisterDomain`](https://docs.aws.amazon.com/amazonswf/latest/apireference/API_RegisterDomain.html) API.
Execute the following command to register a domain named `test-domain`:
-{{< command >}}
-$ awslocal swf register-domain \
+```bash
+awslocal swf register-domain \
--name test-domain \
--workflow-execution-retention-period-in-days 1
-{{< /command >}}
+```
You can use the [`DescribeDomain`](https://docs.aws.amazon.com/amazonswf/latest/apireference/API_DescribeDomain.html) API to verify that the domain was registered successfully.
Run the following command to describe the `test-domain` domain:
-{{< command >}}
-$ awslocal swf describe-domain \
+```bash
+awslocal swf describe-domain \
--name test-domain
-{{< /command >}}
+```
The following output would be retrieved:
@@ -61,31 +59,31 @@ The following output would be retrieved:
You can list all registered domains using the [`ListDomains`](https://docs.aws.amazon.com/amazonswf/latest/apireference/API_ListDomains.html) API.
Run the following command to list all registered domains:
-{{< command >}}
-$ awslocal swf list-domains --registration-status REGISTERED
-{{< /command >}}
+```bash
+awslocal swf list-domains --registration-status REGISTERED
+```
To deprecate a domain, use the [`DeprecateDomain`](https://docs.aws.amazon.com/amazonswf/latest/apireference/API_DeprecateDomain.html) API.
Run the following command to deprecate the `test-domain` domain:
-{{< command >}}
-$ awslocal swf deprecate-domain \
+```bash
+awslocal swf deprecate-domain \
--name test-domain
-{{< /command >}}
+```
You can now list the deprecated domains using the `--registration-status DEPRECATED` flag:
-{{< command >}}
-$ awslocal swf list-domains --registration-status DEPRECATED
-{{< /command >}}
+```bash
+awslocal swf list-domains --registration-status DEPRECATED
+```
### Registering a workflow
You can register a workflow using the [`RegisterWorkflowType`](https://docs.aws.amazon.com/amazonswf/latest/apireference/API_RegisterWorkflowType.html) API.
Execute the following command to register a workflow named `test-workflow`:
-{{< command >}}
-$ awslocal swf register-workflow-type \
+```bash
+awslocal swf register-workflow-type \
--domain test-domain \
--name test-workflow \
--default-task-list name=test-task-list \
@@ -93,16 +91,16 @@ $ awslocal swf register-workflow-type \
--default-execution-start-to-close-timeout 60 \
--default-child-policy TERMINATE \
--workflow-version "1.0"
-{{< /command >}}
+```
You can use the [`DescribeWorkflowType`](https://docs.aws.amazon.com/amazonswf/latest/apireference/API_DescribeWorkflowType.html) API to verify that the workflow was registered successfully.
Run the following command to describe the `test-workflow` workflow:
-{{< command >}}
-$ awslocal swf describe-workflow-type \
+```bash
+awslocal swf describe-workflow-type \
--domain test-domain \
--workflow-type name=test-workflow,version=1.0
-{{< /command >}}
+```
The following output would be retrieved:
@@ -132,8 +130,8 @@ The following output would be retrieved:
You can register an activity using the [`RegisterActivityType`](https://docs.aws.amazon.com/amazonswf/latest/apireference/API_RegisterActivityType.html) API.
Execute the following command to register an activity named `test-activity`:
-{{< command >}}
-$ awslocal swf register-activity-type \
+```bash
+awslocal swf register-activity-type \
--domain test-domain \
--name test-activity \
--default-task-list name=test-task-list \
@@ -142,16 +140,16 @@ $ awslocal swf register-activity-type \
--default-task-schedule-to-start-timeout 30 \
--default-task-schedule-to-close-timeout 30 \
--activity-version "1.0"
-{{< /command >}}
+```
You can use the [`DescribeActivityType`](https://docs.aws.amazon.com/amazonswf/latest/apireference/API_DescribeActivityType.html) API to verify that the activity was registered successfully.
Run the following command to describe the `test-activity` activity:
-{{< command >}}
-$ awslocal swf describe-activity-type \
+```bash
+awslocal swf describe-activity-type \
--domain test-domain \
--activity-type name=test-activity,version=1.0
-{{< /command >}}
+```
The following output would be retrieved:
@@ -182,14 +180,14 @@ The following output would be retrieved:
You can start a workflow execution using the [`StartWorkflowExecution`](https://docs.aws.amazon.com/amazonswf/latest/apireference/API_StartWorkflowExecution.html) API.
Execute the following command to start a workflow execution for the `test-workflow` workflow:
-{{< command >}}
-$ awslocal swf start-workflow-execution \
+```bash
+awslocal swf start-workflow-execution \
--domain test-domain \
--workflow-type name=test-workflow,version=1.0 \
--workflow-id test-workflow-id \
--task-list name=test-task-list \
--input '{"foo": "bar"}'
-{{< /command >}}
+```
The following output would be retrieved:
diff --git a/src/content/docs/aws/services/textract.md b/src/content/docs/aws/services/textract.md
index 0f218da3..f5a12dbc 100644
--- a/src/content/docs/aws/services/textract.md
+++ b/src/content/docs/aws/services/textract.md
@@ -1,6 +1,5 @@
---
title: "Textract"
-linkTitle: "Textract"
description: Get started with Textract on LocalStack
tags: ["Ultimate"]
persistence: supported
@@ -10,7 +9,7 @@ Textract is a machine learning service that automatically extracts text, forms,
It simplifies the process of extracting valuable information from a variety of document types, enabling applications to quickly analyze and understand document content.
LocalStack allows you to mock Textract APIs in your local environment.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_textract" >}}), providing details on the extent of Textract's integration with LocalStack.
+The supported APIs are available on our [API coverage page](), providing details on the extent of Textract's integration with LocalStack.
## Getting started
@@ -24,10 +23,10 @@ We will demonstrate how to perform basic Textract operations, such as mocking te
You can use the [`DetectDocumentText`](https://docs.aws.amazon.com/textract/latest/dg/API_DetectDocumentText.html) API to identify and extract text from a document.
Execute the following command:
-{{< command >}}
-$ awslocal textract detect-document-text \
+```bash
+awslocal textract detect-document-text \
--document '{"S3Object":{"Bucket":"your-bucket","Name":"your-document"}}'
-{{< /command >}}
+```
The following output would be retrieved:
@@ -48,10 +47,10 @@ The following output would be retrieved:
You can use the [`StartDocumentTextDetection`](https://docs.aws.amazon.com/textract/latest/dg/API_StartDocumentTextDetection.html) API to asynchronously detect text in a document.
Execute the following command:
-{{< command >}}
-$ awslocal textract start-document-text-detection \
+```bash
+awslocal textract start-document-text-detection \
--document-location '{"S3Object":{"Bucket":"bucket","Name":"document"}}'
-{{< /command >}}
+```
The following output would be retrieved:
@@ -68,10 +67,10 @@ Save the `JobId` value to use in the next command.
You can use the [`GetDocumentTextDetection`](https://docs.aws.amazon.com/textract/latest/dg/API_GetDocumentTextDetection.html) API to retrieve the results of a document text detection job.
Execute the following command:
-{{< command >}}
-$ awslocal textract get-document-text-detection \
+```bash
+awslocal textract get-document-text-detection \
--job-id "501d7251-1249-41e0-a0b3-898064bfc506"
-{{< /command >}}
+```
Replace `501d7251-1249-41e0-a0b3-898064bfc506` with the `JobId` value retrieved from the previous command.
The following output would be retrieved:
diff --git a/src/content/docs/aws/services/timestream.md b/src/content/docs/aws/services/timestream.md
index bf5c3cf3..ad29b52b 100644
--- a/src/content/docs/aws/services/timestream.md
+++ b/src/content/docs/aws/services/timestream.md
@@ -1,6 +1,5 @@
---
title: "Timestream"
-linkTitle: "Timestream"
description: Get started with Timestream on LocalStack
tags: ["Ultimate"]
persistence: supported
@@ -15,7 +14,7 @@ LocalStack contains basic support for Timestream time series databases, includin
* Writing records to tables
* Querying timeseries data from tables
-The supported APIs are available on our API Coverage Page ([Timestream-Query]({{< ref "coverage_timestream-query" >}})/[Timestream-Write]({{< ref "coverage_timestream-write" >}})), which provides information on the extent of Timestream integration with LocalStack.
+The supported APIs are available on our API Coverage Page ([Timestream-Query]()/[Timestream-Write]()), which provides information on the extent of Timestream integration with LocalStack.
## Getting Started
@@ -23,22 +22,28 @@ The following example illustrates the basic operations, using the [`awslocal`](h
First, we create a test database and table:
-{{< command >}}
-$ awslocal timestream-write create-database --database-name testDB
-$ awslocal timestream-write create-table --database-name testDB --table-name testTable
-{{ command >}}
+```bash
+awslocal timestream-write create-database --database-name testDB
+awslocal timestream-write create-table --database-name testDB --table-name testTable
+```
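+
+To verify the setup, you can, for instance, describe the table you just created:
+
+```bash
+awslocal timestream-write describe-table --database-name testDB --table-name testTable
+```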
We can then add a few records with a timestamp, measure name, and value to the table:
-{{< command >}}
-$ awslocal timestream-write write-records --database-name testDB --table-name testTable --records '[{"MeasureName":"cpu","MeasureValue":"60","TimeUnit":"SECONDS","Time":"1636986409"}]'
-$ awslocal timestream-write write-records --database-name testDB --table-name testTable --records '[{"MeasureName":"cpu","MeasureValue":"80","TimeUnit":"SECONDS","Time":"1636986412"}]'
-$ awslocal timestream-write write-records --database-name testDB --table-name testTable --records '[{"MeasureName":"cpu","MeasureValue":"70","TimeUnit":"SECONDS","Time":"1636986414"}]'
-{{ command >}}
+```bash
+awslocal timestream-write write-records --database-name testDB --table-name testTable --records '[{"MeasureName":"cpu","MeasureValue":"60","TimeUnit":"SECONDS","Time":"1636986409"}]'
+awslocal timestream-write write-records --database-name testDB --table-name testTable --records '[{"MeasureName":"cpu","MeasureValue":"80","TimeUnit":"SECONDS","Time":"1636986412"}]'
+awslocal timestream-write write-records --database-name testDB --table-name testTable --records '[{"MeasureName":"cpu","MeasureValue":"70","TimeUnit":"SECONDS","Time":"1636986414"}]'
+```
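+
+Records can also carry dimensions to distinguish their sources; for example, an illustrative record tagged with a hypothetical `host` dimension:
+
+```bash
+awslocal timestream-write write-records --database-name testDB --table-name testTable --records '[{"Dimensions":[{"Name":"host","Value":"server-1"}],"MeasureName":"cpu","MeasureValue":"75","TimeUnit":"SECONDS","Time":"1636986416"}]'
+```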
Finally, we can run a query to retrieve the timeseries data (or aggregate values) from the table:
-{{< command >}}
-$ awslocal timestream-query query --query-string "SELECT CREATE_TIME_SERIES(time, measure_value::double) as cpu FROM testDB.timeStreamTable WHERE measure_name='cpu'"
+
+```bash
+awslocal timestream-query query --query-string "SELECT CREATE_TIME_SERIES(time, measure_value::double) as cpu FROM testDB.testTable WHERE measure_name='cpu'"
+```
+
+The following output would be retrieved:
+
+```json
{
"Rows": [{
"Data": [{
@@ -49,16 +54,14 @@ $ awslocal timestream-query query --query-string "SELECT CREATE_TIME_SERIES(time
}
},
...
-{{ command >}}
+```
## Resource Browser
The LocalStack Web Application provides a Resource Browser for managing Timestream databases.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Timestream** under the **Database** section.
-
-
-
+
The Resource Browser allows you to perform the following actions:
@@ -70,6 +73,6 @@ The Resource Browser allows you to perform the following actions:
## Current Limitations
-LocalStack's Timestream implementation is under active development and only supports a limited set of operations, please refer to the API Coverage pages for an up-to-date list of implemented and tested functions within [Timestream-Query]({{< ref "coverage_timestream-query" >}}) and [Timestream-Write]({{< ref "coverage_timestream-write" >}}).
+LocalStack's Timestream implementation is under active development and only supports a limited set of operations. Please refer to the API Coverage pages for an up-to-date list of implemented and tested functions within [Timestream-Query]() and [Timestream-Write]().
If you have a use case that uses Timestream but doesn't work with our implementation yet, we encourage you to [get in touch](https://localstack.cloud/contact/) so we can streamline any operations you rely on.
diff --git a/src/content/docs/aws/services/transcribe.md b/src/content/docs/aws/services/transcribe.md
index 331c4c58..732cb17c 100644
--- a/src/content/docs/aws/services/transcribe.md
+++ b/src/content/docs/aws/services/transcribe.md
@@ -1,6 +1,5 @@
---
title: "Transcribe"
-linkTitle: "Transcribe"
description: Get started with Amazon Transcribe on LocalStack
persistence: supported
tags: ["Free"]
@@ -12,12 +11,12 @@ Transcribe is a service provided by AWS that offers automatic speech recognition
It enables developers to convert spoken language into written text, making it valuable for a wide range of applications, from transcription services to voice analytics.
LocalStack allows you to use the Transcribe APIs for offline speech-to-text jobs in your local environment.
-The supported APIs are available on our [API Coverage Page]({{< ref "coverage_transcribe" >}}), which provides information on the extent of Transcribe integration with LocalStack.
+The supported APIs are available on our [API Coverage Page](), which provides information on the extent of Transcribe integration with LocalStack.
LocalStack Transcribe uses an offline speech-to-text library called [Vosk](https://alphacephei.com/vosk/).
It requires an active internet connection to download the language model.
Once the language model is downloaded, subsequent transcriptions for the same language can be performed offline.
-Language models typically have a size of around 50 MiB and are saved in the cache directory (see [Filesystem Layout]({{< ref "filesystem" >}})).
+Language models typically have a size of around 50 MiB and are saved in the cache directory (see [Filesystem Layout](/aws/capabilities/config/filesystem)).
## Getting Started
@@ -31,29 +30,33 @@ We will demonstrate how to create a transcription job and view the transcript in
You can create an S3 bucket using the [`mb`](https://docs.aws.amazon.com/cli/latest/reference/s3/mb.html) command.
Run the following commands to create a bucket named `foo` and upload a sample audio file named `example.wav`:
-{{< command >}}
-$ awslocal s3 mb s3://foo
-$ awslocal s3 cp ~/example.wav s3://foo/example.wav
-{{< / command >}}
+```bash
+awslocal s3 mb s3://foo
+awslocal s3 cp ~/example.wav s3://foo/example.wav
+```
### Create a transcription job
You can create a transcription job using the [`StartTranscriptionJob`](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_StartTranscriptionJob.html) API.
Run the following command to create a transcription job named `example` for the audio file `example.wav`:
-{{< command >}}
-$ awslocal transcribe start-transcription-job \
+```bash
+awslocal transcribe start-transcription-job \
--transcription-job-name example \
--media MediaFileUri=s3://foo/example.wav \
--language-code en-IN
-{{< / command >}}
+```
You can list the transcription jobs using the [`ListTranscriptionJobs`](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_ListTranscriptionJobs.html) API.
Run the following command to list the transcription jobs:
-{{< command >}}
-$ awslocal transcribe list-transcription-jobs
-
+```bash
+awslocal transcribe list-transcription-jobs
+```
+
+The following output would be retrieved:
+
+```json
{
"TranscriptionJobSummaries": [
{
@@ -65,17 +68,20 @@ $ awslocal transcribe list-transcription-jobs
}
]
}
-
-{{< / command >}}
+```
### View the transcript
After the job is complete, the transcript can be retrieved from the S3 bucket using the [`GetTranscriptionJob`](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_GetTranscriptionJob.html) API.
Run the following command to get the transcript:
-{{< command >}}
-$ awslocal transcribe get-transcription-job --transcription-job example
-
+```bash
+awslocal transcribe get-transcription-job --transcription-job-name example
+```
+
+The following output would be retrieved:
+
+```json
{
"TranscriptionJob": {
"TranscriptionJobName": "example",
@@ -93,13 +99,20 @@ $ awslocal transcribe get-transcription-job --transcription-job example
"CompletionTime": "2022-08-17T14:04:57.400000+05:30",
}
}
-
-$ awslocal s3 cp s3://foo/7844aaa5.json .
-$ jq .results.transcripts[0].transcript 7844aaa5.json
-
+```
+
+You can then view the transcript by running the following commands:
+
+```bash
+awslocal s3 cp s3://foo/7844aaa5.json .
+jq .results.transcripts[0].transcript 7844aaa5.json
+```
+
+The following output would be retrieved:
+
+```bash
"it is just a question of getting rid of the illusion that we are separate from nature"
-
-{{< / command >}}
+```
## Audio Formats
@@ -150,9 +163,7 @@ The following languages and dialects are supported:
The LocalStack Web Application provides a Resource Browser for managing Transcribe Transcription Jobs.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resource Browser** section, and then clicking on **Transcribe Service** under the **Machine Learning** section.
-
-
-
+
The Resource Browser allows you to perform the following actions:
diff --git a/src/content/docs/aws/services/transfer.md b/src/content/docs/aws/services/transfer.md
index 468c6959..442c5df0 100644
--- a/src/content/docs/aws/services/transfer.md
+++ b/src/content/docs/aws/services/transfer.md
@@ -2,8 +2,7 @@
title: "Transfer"
-linkTitle: "Transfer"
tags: ["Ultimate"]
-description: >
- Get started with Amazon Transfer on LocalStack
+description: Get started with Transfer on LocalStack
---
## Introduction
diff --git a/src/content/docs/aws/services/verifiedpermissions.md b/src/content/docs/aws/services/verifiedpermissions.md
index c94cf47c..f371cc00 100644
--- a/src/content/docs/aws/services/verifiedpermissions.md
+++ b/src/content/docs/aws/services/verifiedpermissions.md
@@ -1,6 +1,5 @@
---
title: "Verified Permissions"
-linkTitle: "Verified Permissions"
description: Get started with Verified Permissions on LocalStack
tags: ["Ultimate"]
---
@@ -12,7 +11,7 @@ It helps secure applications by moving authorization logic outside the app and m
It checks if a principal can take an action on a resource in a specific context in your application.
LocalStack allows you to use the Verified Permissions APIs in your local environment to test your authorization logic, with integrations with other AWS services like Cognito.
-The supported APIs are available on our [API coverage page]({{< ref "coverage_verifiedpermissions" >}}), which provides information on the extent of Verified Permissions' integration with LocalStack.
+The supported APIs are available on our [API coverage page](), which provides information on the extent of Verified Permissions' integration with LocalStack.
## Getting started
@@ -26,11 +25,11 @@ We will demonstrate how to create a Verified Permissions Policy Store, add a pol
To create a Verified Permissions Policy Store, use the [`CreatePolicyStore`](https://docs.aws.amazon.com/verifiedpermissions/latest/apireference/API_CreatePolicyStore.html) API.
Run the following command to create a Policy Store with Schema validation settings set to `OFF`:
-{{< command >}}
-$ awslocal verifiedpermissions create-policy-store \
+```bash
+awslocal verifiedpermissions create-policy-store \
--validation-settings mode=OFF \
--description "A local Policy Store"
-{{< /command >}}
+```
The above command returns the following response:
@@ -46,9 +45,9 @@ The above command returns the following response:
You can list all the Verified Permissions policy stores using the [`ListPolicyStores`](https://docs.aws.amazon.com/verifiedpermissions/latest/apireference/API_ListPolicyStores.html) API.
Run the following command to list all the Verified Permissions policy stores:
-{{< command >}}
-$ awslocal verifiedpermissions list-policy-stores
-{{< /command >}}
+```bash
+awslocal verifiedpermissions list-policy-stores
+```
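+
+To inspect a single store, you can, for example, use the [`GetPolicyStore`](https://docs.aws.amazon.com/verifiedpermissions/latest/apireference/API_GetPolicyStore.html) API with the ID returned above:
+
+```bash
+awslocal verifiedpermissions get-policy-store \
+  --policy-store-id q5PCScu9qo4aswMVc0owNN
+```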
### Create a Policy
@@ -66,11 +65,12 @@ Create a JSON file named `static_policy.json` with the following content:
```
You can then run this command to create the policy:
-{{< command >}}
-$ awslocal verifiedpermissions create-policy \
+
+```bash
+awslocal verifiedpermissions create-policy \
--definition file://static_policy.json \
--policy-store-id q5PCScu9qo4aswMVc0owNN
-{{< /command >}}
+```
Replace the policy store ID with the ID of the policy store you created previously.
@@ -106,13 +106,13 @@ You should see the following output:
We can now make use of the Policy Store and the Policy to start authorizing requests.
To authorize a request using Verified Permissions, use the [`IsAuthorized`](https://docs.aws.amazon.com/verifiedpermissions/latest/apireference/API_IsAuthorized.html) API.
-{{< command >}}
-$ awslocal verifiedpermissions is-authorized \
+```bash
+awslocal verifiedpermissions is-authorized \
--policy-store-id q5PCScu9qo4aswMVc0owNN \
--principal entityType=User,entityId=alice \
--action actionType=Action,actionId=view \
--resource entityType=Album,entityId=trip
-{{< /command >}}
+```
You should get the following output, indicating that your request was allowed:
diff --git a/src/content/docs/aws/services/waf.md b/src/content/docs/aws/services/waf.md
index ca7b2ce3..5ca30116 100644
--- a/src/content/docs/aws/services/waf.md
+++ b/src/content/docs/aws/services/waf.md
@@ -1,6 +1,5 @@
---
title: "Web Application Firewall (WAF)"
-linkTitle: "Web Application Firewall (WAF)"
description: Get started with Web Application Firewall (WAF) on LocalStack
tags: ["Ultimate"]
---
@@ -11,7 +10,7 @@ Web Application Firewall (WAF) is a service provided by Amazon Web Services (AWS
WAFv2 is the latest version of WAF, and it allows you to specify a single set of rules to protect your web applications, APIs, and mobile applications from common attack patterns, such as SQL injection and cross-site scripting.
LocalStack allows you to use the WAFv2 APIs for offline web application firewall jobs in your local environment.
-The supported APIs are available on our [API Coverage Page]({{< ref "coverage_wafv2" >}}), which provides information on the extent of WAFv2 integration with LocalStack.
+The supported APIs are available on our [API Coverage Page](), which provides information on the extent of WAFv2 integration with LocalStack.
## Getting started
@@ -25,13 +24,17 @@ We will walk you through creating, listing, tagging, and viewing tags for Web Ac
Start by creating a Web Access Control List (WebACL) using the [`CreateWebACL`](https://docs.aws.amazon.com/waf/latest/APIReference/API_CreateWebACL.html) API.
Run the following command to create a WebACL named `TestWebAcl`:
-{{< command >}}
-$ awslocal wafv2 create-web-acl \
+```bash
+awslocal wafv2 create-web-acl \
--name TestWebAcl \
--scope REGIONAL \
--default-action Allow={} \
--visibility-config SampledRequestsEnabled=true,CloudWatchMetricsEnabled=true,MetricName=TestWebAclMetrics
-
+```
+
+The following output would be retrieved:
+
+```json
{
"Summary": {
"Name": "TestWebAcl",
@@ -40,8 +43,7 @@ $ awslocal wafv2 create-web-acl \
"ARN": "arn:aws:wafv2:us-east-1:000000000000:regional/webacl/TestWebAcl/f94fd5bc-e4d4-4280-9f53-51e9441ad51d"
}
}
-
-{{< /command >}}
+```
Note the `Id` and `ARN` from the output, as they will be needed for subsequent commands.
@@ -50,9 +52,13 @@ Note the `Id` and `ARN` from the output, as they will be needed for subsequent c
To view all the WebACLs you have created, use the [`ListWebACLs`](https://docs.aws.amazon.com/waf/latest/APIReference/API_ListWebACLs.html) API.
Run the following command to list the WebACLs:
-{{< command >}}
-$ awslocal wafv2 list-web-acls --scope REGIONAL
-
+```bash
+awslocal wafv2 list-web-acls --scope REGIONAL
+```
+
+You should see the following output:
+
+```json
{
"NextMarker": "Not Implemented",
"WebACLs": [
@@ -64,8 +70,7 @@ $ awslocal wafv2 list-web-acls --scope REGIONAL
}
]
}
-
-{{< /command >}}
+```
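+
+The tagging commands in the next step need the WebACL ARN.
+Rather than copying it from the output by hand, you can capture it with a `--query` filter — a quick sketch, assuming the `TestWebAcl` name used above:
+
+```bash
+# Look up the ARN of the WebACL named TestWebAcl via a JMESPath filter
+WEB_ACL_ARN=$(awslocal wafv2 list-web-acls --scope REGIONAL \
+  --query "WebACLs[?Name=='TestWebAcl'].ARN | [0]" --output text)
+echo $WEB_ACL_ARN
+```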
### Tag a WebACL
@@ -73,20 +78,24 @@ Tagging resources in AWS WAF helps you manage and identify them.
Use the [`TagResource`](https://docs.aws.amazon.com/waf/latest/APIReference/API_TagResource.html) API to add tags to a WebACL.
Run the following command to add a tag to the WebACL created in the previous step:
-{{< command >}}
-$ awslocal wafv2 tag-resource \
+```bash
+awslocal wafv2 tag-resource \
--resource-arn arn:aws:wafv2:us-east-1:000000000000:regional/webacl/TestWebAcl/f94fd5bc-e4d4-4280-9f53-51e9441ad51d \
--tags Key=Name,Value=AWSWAF
-{{< /command >}}
+```
After tagging your resources, you may want to view these tags.
Use the [`ListTagsForResource`](https://docs.aws.amazon.com/waf/latest/APIReference/API_ListTagsForResource.html) API to list the tags for a WebACL.
Run the following command to list the tags for the WebACL created in the previous step:
-{{< command >}}
-$ awslocal wafv2 list-tags-for-resource \
+```bash
+awslocal wafv2 list-tags-for-resource \
--resource-arn arn:aws:wafv2:us-east-1:000000000000:regional/webacl/TestWebAcl/f94fd5bc-e4d4-4280-9f53-51e9441ad51d
-
+```
+
+You should see the following output:
+
+```json
{
"TagInfoForResource": {
"ResourceARN": "arn:aws:wafv2:us-east-1:000000000000:regional/webacl/TestWebAcl/f94fd5bc-e4d4-4280-9f53-51e9441ad51d",
@@ -98,5 +107,4 @@ $ awslocal wafv2 list-tags-for-resource \
]
}
}
-
-{{< /command >}}
+```
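+
+If you need to remove a tag again, the complementary [`UntagResource`](https://docs.aws.amazon.com/waf/latest/APIReference/API_UntagResource.html) API takes the resource ARN and a list of tag keys — a quick sketch, reusing the ARN from above (check the API Coverage Page for support):
+
+```bash
+awslocal wafv2 untag-resource \
+  --resource-arn arn:aws:wafv2:us-east-1:000000000000:regional/webacl/TestWebAcl/f94fd5bc-e4d4-4280-9f53-51e9441ad51d \
+  --tag-keys Name
+```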
diff --git a/src/content/docs/aws/services/xray.md b/src/content/docs/aws/services/xray.md
index 16f28f68..ac5073b6 100644
--- a/src/content/docs/aws/services/xray.md
+++ b/src/content/docs/aws/services/xray.md
@@ -1,6 +1,5 @@
---
title: "X-Ray"
-linkTitle: "X-Ray"
description: Get started with X-Ray on LocalStack
tags: ["Ultimate"]
---
@@ -20,7 +19,7 @@ The X-Ray API can then be used to retrieve traces originating from different app
LocalStack allows you to use the X-Ray APIs to send and retrieve trace segments in your local environment.
-The supported APIs are available on our [API Coverage Page]({{< ref "coverage_xray" >}}),
+The supported APIs are available on our [API Coverage Page](),
which provides information on the extent of X-Ray integration with LocalStack.
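+
+For example, once trace segments have been sent (as in the walkthrough below), they can be queried back with the [`GetTraceSummaries`](https://docs.aws.amazon.com/xray/latest/api/API_GetTraceSummaries.html) API — a sketch with illustrative epoch timestamps:
+
+```bash
+# Retrieve summaries of traces recorded within the given time window
+awslocal xray get-trace-summaries \
+  --start-time 1694625200 \
+  --end-time 1694625400
+```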
## Getting started
@@ -41,35 +40,37 @@ You can generate a unique trace ID and construct a JSON document with trace in
It then sends this trace segment to the AWS X-Ray API using the [PutTraceSegments](https://docs.aws.amazon.com/xray/latest/api/API_PutTraceSegments.html) API.
Run the following commands in your terminal:
-{{< command >}}
-$ START_TIME=$(date +%s)
-$ HEX_TIME=$(printf '%x\n' $START_TIME)
-$ GUID=$(dd if=/dev/random bs=12 count=1 2>/dev/null | od -An -tx1 | tr -d ' \t\n')
-$ TRACE_ID="1-$HEX_TIME-$GUID"
-$ END_TIME=$(($START_TIME+3))
-$ DOC=$(cat <<EOF
-{"trace_id": "$TRACE_ID", "id": "6226467e3f845502", "start_time": $START_TIME, "end_time": $END_TIME, "name": "test.elasticbeanstalk.com"}
-EOF
-)
-$ echo "Sending trace segment to X-Ray API: $DOC"
-$ awslocal xray put-trace-segments --trace-segment-documents "$DOC"
-{{< /command >}}
+```bash
+# Build a trace ID in the X-Ray format: 1-<epoch seconds in hex>-<24 hex digits>
+START_TIME=$(date +%s)
+HEX_TIME=$(printf '%x\n' $START_TIME)
+GUID=$(dd if=/dev/random bs=12 count=1 2>/dev/null | od -An -tx1 | tr -d ' \t\n')
+TRACE_ID="1-$HEX_TIME-$GUID"
+END_TIME=$(($START_TIME+3))
+# Construct the segment document and send it to the X-Ray API
+DOC=$(cat <<EOF
+{"trace_id": "$TRACE_ID", "id": "6226467e3f845502", "start_time": $START_TIME, "end_time": $END_TIME, "name": "test.elasticbeanstalk.com"}
+EOF
+)
+echo "Sending trace segment to X-Ray API: $DOC"
+awslocal xray put-trace-segments --trace-segment-documents "$DOC"
+```
+
+You should see the following output:
+
+```text
Sending trace segment to X-Ray API: {"trace_id": "1-6501ee11-056ec85fafff21f648e2d3ae", "id": "6226467e3f845502", "start_time": 1694625297.37518, "end_time": 1694625300.4042, "name": "test.elasticbeanstalk.com"}
{
"UnprocessedTraceSegments": []
}
-