Mock sample for your project: Application Auto Scaling API

Integrate with "Application Auto Scaling API" from amazonaws.com in no time with Mockoon's ready to use mock sample

Application Auto Scaling

amazonaws.com

Version: 2016-02-06


Use this API in your project

Start working with "Application Auto Scaling API" right away by using this ready-to-use mock sample. API mocking can greatly speed up your application development by removing all the tedious tasks or issues: API key provisioning, account creation, unplanned downtime, etc.
It also helps reduce your dependency on third-party APIs and improves your integration tests' quality and reliability by accounting for random failures, slow response time, etc.
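As a quick illustration, the sketch below points a boto3 client at a locally running mock instead of the real service. The local URL (http://localhost:3000) and the dummy credentials are assumptions for the example; adjust them to match your own mock configuration.

```python
# Minimal sketch: point an AWS SDK (boto3) client at a locally running mock
# instead of the real Application Auto Scaling endpoint. The URL/port and the
# dummy credentials below are assumptions for illustration only.
import boto3

mock_client = boto3.client(
    "application-autoscaling",
    endpoint_url="http://localhost:3000",   # assumed local mock URL
    region_name="us-east-1",                 # any region; the mock ignores it
    aws_access_key_id="test",                # dummy credentials for the mock
    aws_secret_access_key="test",
)

# The mock returns canned responses, so this call succeeds without an AWS account.
print(mock_client.describe_scalable_targets(ServiceNamespace="ecs"))
```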

Description

With Application Auto Scaling, you can configure automatic scaling for the following resources:
Amazon AppStream 2.0 fleets
Amazon Aurora Replicas
Amazon Comprehend document classification and entity recognizer endpoints
Amazon DynamoDB tables and global secondary indexes throughput capacity
Amazon ECS services
Amazon ElastiCache for Redis clusters (replication groups)
Amazon EMR clusters
Amazon Keyspaces (for Apache Cassandra) tables
Lambda function provisioned concurrency
Amazon Managed Streaming for Apache Kafka broker storage
Amazon SageMaker endpoint variants
Spot Fleet (Amazon EC2) requests
Custom resources provided by your own applications or services

API Summary
The Application Auto Scaling service API includes three key sets of actions:
Register and manage scalable targets – Register Amazon Web Services or custom resources as scalable targets (a resource that Application Auto Scaling can scale), set minimum and maximum capacity limits, and retrieve information on existing scalable targets.
Configure and manage automatic scaling – Define scaling policies to dynamically scale your resources in response to CloudWatch alarms, schedule one-time or recurring scaling actions, and retrieve your recent scaling activity history.
Suspend and resume scaling – Temporarily suspend and later resume automatic scaling by calling the RegisterScalableTarget API action for any Application Auto Scaling scalable target. You can suspend and resume (individually or in combination) scale-out activities that are triggered by a scaling policy, scale-in activities that are triggered by a scaling policy, and scheduled scaling.

To learn more about Application Auto Scaling, including information about granting IAM users required permissions for Application Auto Scaling actions, see the Application Auto Scaling User Guide.
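As an illustration of these three action groups, here is a minimal boto3 sketch. The ECS service name, capacity limits, and target-tracking settings are placeholders chosen for the example, not values taken from this mock sample.

```python
# Hedged sketch of the three action groups described above, using boto3.
import boto3

aas = boto3.client("application-autoscaling", region_name="us-east-1")

# 1) Register a scalable target (here: an ECS service's desired count).
aas.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/my-cluster/my-service",   # placeholder resource
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=1,
    MaxCapacity=10,
)

# 2) Attach a target-tracking scaling policy (scales on average CPU usage).
aas.put_scaling_policy(
    PolicyName="cpu-target-tracking",             # placeholder name
    ServiceNamespace="ecs",
    ResourceId="service/my-cluster/my-service",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 50.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)

# 3) Suspend (and later resume) scaling via RegisterScalableTarget's SuspendedState.
aas.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/my-cluster/my-service",
    ScalableDimension="ecs:service:DesiredCount",
    SuspendedState={
        "DynamicScalingInSuspended": True,
        "DynamicScalingOutSuspended": True,
        "ScheduledScalingSuspended": True,
    },
)
```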

Other APIs by amazonaws.com

Amazon Kinesis Analytics

Overview: This documentation is for version 1 of the Amazon Kinesis Data Analytics API, which only supports SQL applications. Version 2 of the API supports SQL and Java applications. For more information about version 2, see Amazon Kinesis Data Analytics API V2 Documentation. This is the Amazon Kinesis Analytics v1 API Reference. The Amazon Kinesis Analytics Developer Guide provides additional information.

Route53 Recovery Cluster

Welcome to the Amazon Route 53 Application Recovery Controller API Reference Guide for Recovery Control Data Plane. Recovery control in Route 53 Application Recovery Controller includes extremely reliable routing controls that enable you to recover applications by rerouting traffic, for example, across Availability Zones or AWS Regions. Routing controls are simple on/off switches hosted on a cluster. A cluster is a set of five redundant regional endpoints against which you can execute API calls to update or get the state of routing controls. You use routing controls to fail over traffic to recover your application across Availability Zones or Regions. This API guide includes information about how to get and update routing control states in Route 53 Application Recovery Controller. For more information about Route 53 Application Recovery Controller, see the following:
You can create clusters, routing controls, and control panels by using the control plane API for Recovery Control. For more information, see the Amazon Route 53 Application Recovery Controller Recovery Control API Reference.
Route 53 Application Recovery Controller also provides continuous readiness checks to ensure that your applications are scaled to handle failover traffic. For more information about the related API actions, see the Amazon Route 53 Application Recovery Controller Recovery Readiness API Reference.
For more information about creating resilient applications and preparing for recovery readiness with Route 53 Application Recovery Controller, see the Amazon Route 53 Application Recovery Controller Developer Guide.
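For illustration, here is a hedged boto3 sketch of getting and updating a routing control state. The cluster endpoint URL and the routing control ARN are placeholders; in practice you would try each of the cluster's five regional endpoints until one responds.

```python
# Hedged sketch: read and flip a routing control state with boto3.
import boto3

client = boto3.client(
    "route53-recovery-cluster",
    region_name="us-west-2",
    endpoint_url="https://example-cluster-endpoint.example.com",  # placeholder cluster endpoint
)

# Placeholder ARN; use one of your own routing controls.
arn = "arn:aws:route53-recovery-control::123456789012:controlpanel/abc/routingcontrol/xyz"

state = client.get_routing_control_state(RoutingControlArn=arn)
print(state["RoutingControlState"])  # "On" or "Off"

# Turn traffic off for this routing control (e.g., to fail away from a Region).
client.update_routing_control_state(RoutingControlArn=arn, RoutingControlState="Off")
```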

Amazon Simple Queue Service

Welcome to the Amazon SQS API Reference. Amazon SQS is a reliable, highly scalable hosted queue for storing messages as they travel between applications or microservices. Amazon SQS moves data between distributed application components and helps you decouple these components. For information on the permissions you need to use this API, see Identity and access management in the Amazon SQS Developer Guide. You can use Amazon Web Services SDKs to access Amazon SQS using your favorite programming language. The SDKs automatically perform tasks such as the following:
Cryptographically sign your service requests
Retry requests
Handle error responses
Additional information: Amazon SQS Product Page, Amazon SQS Developer Guide (Making API Requests, Amazon SQS Message Attributes, Amazon SQS Dead-Letter Queues, Amazon SQS in the Command Line Interface), and the Amazon Web Services General Reference (Regions and Endpoints).
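As a quick illustration of the basic queue flow, here is a hedged boto3 sketch; the queue name is a placeholder for the example.

```python
# Minimal sketch of producing and consuming a message with boto3.
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

queue_url = sqs.create_queue(QueueName="demo-queue")["QueueUrl"]  # placeholder name

# Producer side: enqueue a message.
sqs.send_message(QueueUrl=queue_url, MessageBody="hello from a decoupled component")

# Consumer side: long-poll for up to 10 seconds, then delete what was processed.
resp = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=10, MaxNumberOfMessages=1)
for msg in resp.get("Messages", []):
    print(msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```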

Amazon Relational Database Service

Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the cloud. It provides cost-efficient, resizable capacity for an industry-standard relational database and manages common database administration tasks, freeing up developers to focus on what makes their applications and businesses unique. Amazon RDS gives you access to the capabilities of a MySQL, MariaDB, PostgreSQL, Microsoft SQL Server, Oracle, or Amazon Aurora database server. These capabilities mean that the code, applications, and tools you already use today with your existing databases work with Amazon RDS without modification. Amazon RDS automatically backs up your database and maintains the database software that powers your DB instance. Amazon RDS is flexible: you can scale your DB instance's compute resources and storage capacity to meet your application's demand. As with all Amazon Web Services, there are no up-front investments, and you pay only for the resources you use.
This interface reference for Amazon RDS contains documentation for a programming or command line interface you can use to manage Amazon RDS. Amazon RDS is asynchronous, which means that some interfaces might require techniques such as polling or callback functions to determine when a command has been applied. In this reference, the parameter descriptions indicate whether a command is applied immediately, on the next instance reboot, or during the maintenance window (a short polling sketch follows this description). The reference is structured as follows, with some related topics from the user guide:
Amazon RDS API Reference – For the alphabetical list of API actions, see API Actions. For the alphabetical list of data types, see Data Types. For a list of common query parameters, see Common Parameters. For descriptions of the error codes, see Common Errors.
Amazon RDS User Guide – For a summary of the Amazon RDS interfaces, see Available RDS Interfaces. For more information about how to use the Query API, see Using the Query API.
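Here is a hedged boto3 sketch of handling that asynchronous behavior: issue a command, then poll with a built-in waiter until it takes effect. The DB instance identifier and storage size are placeholders.

```python
# Hedged sketch: modify an RDS instance, then poll until the change is applied.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Request a change that is not applied instantly.
rds.modify_db_instance(
    DBInstanceIdentifier="my-database",   # placeholder identifier
    AllocatedStorage=100,                  # placeholder size (GiB)
    ApplyImmediately=True,
)

# Poll until the instance returns to the "available" state.
waiter = rds.get_waiter("db_instance_available")
waiter.wait(DBInstanceIdentifier="my-database")
print("Modification applied.")
```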

Redshift Data API Service

You can use the Amazon Redshift Data API to run queries on Amazon Redshift tables. You can run SQL statements, which are committed if the statement succeeds. For more information about the Amazon Redshift Data API, see Using the Amazon Redshift Data API in the Amazon Redshift Cluster Management Guide.
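For illustration, here is a hedged boto3 sketch of running a statement through the Data API and reading its result. The cluster identifier, database name, and secret ARN are placeholders.

```python
# Hedged sketch: run a SQL statement via the Redshift Data API and fetch the result.
import time
import boto3

rsd = boto3.client("redshift-data", region_name="us-east-1")

resp = rsd.execute_statement(
    ClusterIdentifier="my-cluster",   # placeholder
    Database="dev",                    # placeholder
    SecretArn="arn:aws:secretsmanager:us-east-1:123456789012:secret:my-creds",  # placeholder
    Sql="SELECT current_date",
)

# The Data API is asynchronous: poll until the statement finishes.
while rsd.describe_statement(Id=resp["Id"])["Status"] not in ("FINISHED", "FAILED", "ABORTED"):
    time.sleep(1)

print(rsd.get_statement_result(Id=resp["Id"])["Records"])
```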

Amazon SimpleDB

Amazon SimpleDB is a web service providing the core database functions of data indexing and querying in the cloud. By offloading the time and effort associated with building and operating a web-scale database, SimpleDB provides developers the freedom to focus on application development. A traditional, clustered relational database requires a sizable upfront capital outlay, is complex to design, and often requires extensive and repetitive database administration. Amazon SimpleDB is dramatically simpler, requiring no schema, automatically indexing your data and providing a simple API for storage and access. This approach eliminates the administrative burden of data modeling, index maintenance, and performance tuning. Developers gain access to this functionality within Amazon's proven computing environment, are able to scale instantly, and pay only for what they use. Visit http://aws.amazon.com/simpledb/ for more information.

AWSServerlessApplicationRepository

The AWS Serverless Application Repository makes it easy for developers and enterprises to quickly find and deploy serverless applications in the AWS Cloud. For more information about serverless applications, see Serverless Computing and Applications on the AWS website.
The AWS Serverless Application Repository is deeply integrated with the AWS Lambda console, so that developers of all levels can get started with serverless computing without needing to learn anything new. You can use category keywords to browse for applications such as web and mobile backends, data processing applications, or chatbots. You can also search for applications by name, publisher, or event source. To use an application, you simply choose it, configure any required fields, and deploy it with a few clicks.
You can also easily publish applications, sharing them publicly with the community at large, or privately within your team or across your organization. To publish a serverless application (or app), you can use the AWS Management Console, AWS Command Line Interface (AWS CLI), or AWS SDKs to upload the code. Along with the code, you upload a simple manifest file, also known as the AWS Serverless Application Model (AWS SAM) template. For more information about AWS SAM, see AWS Serverless Application Model (AWS SAM) on the AWS Labs GitHub repository. A minimal SDK-based publishing sketch follows the list below.
The AWS Serverless Application Repository Developer Guide contains more information about the two developer experiences available:
Consuming Applications – Browse for applications and view information about them, including source code and readme files. Also install, configure, and deploy applications of your choosing.
Publishing Applications – Configure and upload applications to make them available to other developers, and publish new versions of applications.
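Here is the hedged publishing sketch referenced above, assuming the boto3 serverlessrepo client. The application name, author, version, and SAM template path are placeholders for the example.

```python
# Hedged sketch: publish an application from code with boto3 (serverlessrepo).
import boto3

repo = boto3.client("serverlessrepo", region_name="us-east-1")

with open("template.yaml") as f:          # the AWS SAM template for the app (placeholder path)
    template_body = f.read()

app = repo.create_application(
    Name="my-serverless-app",             # placeholder
    Author="example-author",              # placeholder
    Description="Demo app published from the SDK",
    SemanticVersion="0.0.1",              # placeholder version
    TemplateBody=template_body,
)
print(app["ApplicationId"])
```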

Amazon Lex Runtime V2

AWS Savings Plans

Savings Plans are a pricing model that offers significant savings on AWS usage (for example, on Amazon EC2 instances). You commit to a consistent amount of usage, in USD per hour, for a term of 1 or 3 years, and receive a lower price for that usage. For more information, see the AWS Savings Plans User Guide.

Amazon Route 53

Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service.

Amazon Simple Email Service

Welcome to the Amazon SES API v2 Reference. This guide provides information about the Amazon SES API v2, including supported operations, data types, parameters, and schemas. Amazon SES is an AWS service that you can use to send email messages to your customers. If you're new to Amazon SES API v2, you might find it helpful to also review the Amazon Simple Email Service Developer Guide. The Amazon SES Developer Guide provides information and code samples that demonstrate how to use Amazon SES API v2 features programmatically.
The Amazon SES API v2 is available in several AWS Regions and it provides an endpoint for each of these Regions. For a list of all the Regions and endpoints where the API is currently available, see AWS Service Endpoints in the Amazon Web Services General Reference. To learn more about AWS Regions, see Managing AWS Regions in the Amazon Web Services General Reference. In each Region, AWS maintains multiple Availability Zones. These Availability Zones are physically isolated from each other, but are united by private, low-latency, high-throughput, and highly redundant network connections. These Availability Zones enable us to provide very high levels of availability and redundancy, while also minimizing latency. To learn more about the number of Availability Zones that are available in each Region, see AWS Global Infrastructure.
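As a quick illustration of the send path, here is a hedged boto3 sketch using the SES API v2; the addresses are placeholders and would need to be verified identities against the real service.

```python
# Hedged sketch: send a simple message through the SES API v2 with boto3.
import boto3

ses = boto3.client("sesv2", region_name="us-east-1")

ses.send_email(
    FromEmailAddress="sender@example.com",                  # placeholder
    Destination={"ToAddresses": ["recipient@example.com"]}, # placeholder
    Content={
        "Simple": {
            "Subject": {"Data": "Hello from SES API v2"},
            "Body": {"Text": {"Data": "This is a test message."}},
        }
    },
)
```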

Amazon WorkDocs

The WorkDocs API is designed for the following use cases:
File Migration – File migration applications are supported for users who want to migrate their files from an on-premises or off-premises file system or service. Users can insert files into a user directory structure, as well as allow for basic metadata changes, such as modifications to the permissions of files.
Security – Security applications are supported for users who have additional security needs, such as antivirus or data loss prevention. The API actions, along with AWS CloudTrail, allow these applications to detect when changes occur in Amazon WorkDocs. Then, the application can take the necessary actions and replace the target file. If the target file violates the policy, the application can also choose to email the user.
eDiscovery/Analytics – General administrative applications are supported, such as eDiscovery and analytics. These applications can choose to mimic or record the actions in an Amazon WorkDocs site, along with AWS CloudTrail, to replicate data for eDiscovery, backup, or analytical applications.
All Amazon WorkDocs API actions are Amazon authenticated and certificate-signed. They not only require the use of the AWS SDK, but also allow for the exclusive use of IAM users and roles to help facilitate access, trust, and permission policies. By creating a role and allowing an IAM user to access the Amazon WorkDocs site, the IAM user gains full administrative visibility into the entire Amazon WorkDocs site (or as set in the IAM policy). This includes, but is not limited to, the ability to modify file permissions and upload any file to any user. This allows developers to perform the three use cases above, as well as give users the ability to grant access on a selective basis using the IAM model.

Other APIs in the same category

Azure Resource Graph

azure.com
Azure Resource Graph API Reference

AmazonMWAA

This section contains the Amazon Managed Workflows for Apache Airflow (MWAA) API reference documentation. For more information, see What Is Amazon MWAA?.

AWS IoT 1-Click Devices Service

Describes all of the AWS IoT 1-Click device-related API operations for the service. Also provides sample requests, responses, and errors for the supported web services protocols.

Update Management

azure.com
APIs for managing software update configurations.

AWS OpsWorks CM

AWS OpsWorks for configuration management (CM) is a service that runs and manages configuration management servers. You can use AWS OpsWorks CM to create and manage AWS OpsWorks for Chef Automate and AWS OpsWorks for Puppet Enterprise servers, and add or remove nodes for the servers to manage.
Glossary of terms
Server – A configuration management server that can be highly available. The configuration management server runs on an Amazon Elastic Compute Cloud (EC2) instance, and may use various other AWS services, such as Amazon Relational Database Service (RDS) and Elastic Load Balancing. A server is a generic abstraction over the configuration manager that you want to use, much like Amazon RDS. In AWS OpsWorks CM, you do not start or stop servers. After you create servers, they continue to run until they are deleted.
Engine – The engine is the specific configuration manager that you want to use. Valid values in this release include ChefAutomate and Puppet.
Backup – This is an application-level backup of the data that the configuration manager stores. AWS OpsWorks CM creates an S3 bucket for backups when you launch the first server. A backup maintains a snapshot of a server's configuration-related attributes at the time the backup starts.
Events – Events are always related to a server. Events are written during server creation, when health checks run, when backups are created, when system maintenance is performed, etc. When you delete a server, the server's events are also deleted.
Account attributes – Every account has attributes that are assigned in the AWS OpsWorks CM database. These attributes store information about configuration limits (servers, backups, etc.) and your customer account.
Endpoints
AWS OpsWorks CM supports the following endpoints, all HTTPS. You must connect to one of these endpoints; your servers can only be accessed or managed within the endpoint in which they are created.
opsworks-cm.us-east-1.amazonaws.com
opsworks-cm.us-east-2.amazonaws.com
opsworks-cm.us-west-1.amazonaws.com
opsworks-cm.us-west-2.amazonaws.com
opsworks-cm.ap-northeast-1.amazonaws.com
opsworks-cm.ap-southeast-1.amazonaws.com
opsworks-cm.ap-southeast-2.amazonaws.com
opsworks-cm.eu-central-1.amazonaws.com
opsworks-cm.eu-west-1.amazonaws.com
For more information, see AWS OpsWorks endpoints and quotas in the AWS General Reference.
Throttling limits
All API operations allow for five requests per second with a burst of 10 requests per second.
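For illustration, here is a hedged boto3 sketch that lists existing servers with SDK-level retries enabled, which helps stay within the throttling limits noted above. The region is a placeholder and must be one of the supported endpoints listed above.

```python
# Hedged sketch: an OpsWorks CM client with adaptive retries, listing servers.
import boto3
from botocore.config import Config

opsworks_cm = boto3.client(
    "opsworkscm",
    region_name="us-east-1",   # placeholder; must map to a supported endpoint
    config=Config(retries={"max_attempts": 10, "mode": "adaptive"}),
)

for server in opsworks_cm.describe_servers()["Servers"]:
    print(server["ServerName"], server["Engine"], server["Status"])
```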

AWS Glue DataBrew

Glue DataBrew is a visual, cloud-scale data-preparation service. DataBrew simplifies data preparation tasks, targeting data issues that are hard to spot and time-consuming to fix. DataBrew empowers users of all technical levels to visualize the data and perform one-click data transformations, with no coding required.

AWS Organizations

AWS Organizations is a web service that enables you to consolidate your multiple AWS accounts into an organization and centrally manage your accounts and their resources. This guide provides descriptions of the Organizations operations. For more information about using this service, see the AWS Organizations User Guide.
Support and feedback for AWS Organizations
We welcome your feedback. Send your comments to [email protected] or post your feedback and questions in the AWS Organizations support forum. For more information about the AWS support forums, see Forums Help.
Endpoint to call when using the AWS CLI or the AWS SDK
For the current release of Organizations, specify the us-east-1 region for all AWS API and AWS CLI calls made from the commercial AWS Regions outside of China. If calling from one of the AWS Regions in China, then specify cn-northwest-1. You can do this in the AWS CLI by using these parameters and commands (an SDK equivalent is sketched after this description):
Use the following parameter with each command to specify both the endpoint and its region: --endpoint-url https://organizations.us-east-1.amazonaws.com (from commercial AWS Regions outside of China) or --endpoint-url https://organizations.cn-northwest-1.amazonaws.com.cn (from AWS Regions in China)
Use the default endpoint, but configure your default region with this command: aws configure set default.region us-east-1 (from commercial AWS Regions outside of China) or aws configure set default.region cn-northwest-1 (from AWS Regions in China)
Use the following parameter with each command to specify the endpoint: --region us-east-1 (from commercial AWS Regions outside of China) or --region cn-northwest-1 (from AWS Regions in China)
Recording API Requests
AWS Organizations supports AWS CloudTrail, a service that records AWS API calls for your AWS account and delivers log files to an Amazon S3 bucket. By using information collected by AWS CloudTrail, you can determine which requests the Organizations service received, who made the request and when, and so on. For more about AWS Organizations and its support for AWS CloudTrail, see Logging AWS Organizations Events with AWS CloudTrail in the AWS Organizations User Guide. To learn more about AWS CloudTrail, including how to turn it on and find your log files, see the AWS CloudTrail User Guide.
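Here is the hedged SDK sketch referenced above: a boto3 client pinned to the Region the endpoint guidance calls for.

```python
# Hedged sketch: pin the Organizations client to us-east-1 as described above.
import boto3

org = boto3.client("organizations", region_name="us-east-1")
# From the AWS Regions in China, use instead:
# org = boto3.client("organizations", region_name="cn-northwest-1")

print(org.describe_organization()["Organization"]["Id"])
```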

Amazon Kinesis Analytics

Amazon Kinesis Data Analytics is a fully managed service that you can use to process and analyze streaming data using Java, SQL, or Scala. The service enables you to quickly author and run Java, SQL, or Scala code against streaming sources to perform time series analytics, feed real-time dashboards, and create real-time metrics.

Amazon Braket

The Amazon Braket API Reference provides information about the operations and structures supported in Amazon Braket.

FinSpace User Environment Management service

The FinSpace management service provides the APIs for managing FinSpace environments.

AWS RoboMaker

This section provides documentation for the AWS RoboMaker API operations.

AWS IoT Data Plane

IoT data enables secure, bi-directional communication between Internet-connected things (such as sensors, actuators, embedded devices, or smart appliances) and the Amazon Web Services cloud. It implements a broker for applications and things to publish messages over HTTP (Publish) and retrieve, update, and delete shadows. A shadow is a persistent representation of your things and their state in the Amazon Web Services cloud.
Find the endpoint address for actions in IoT data by running this CLI command: aws iot describe-endpoint --endpoint-type iot:Data-ATS
The service name used by Amazon Web Services Signature Version 4 to sign requests is: iotdevicegateway.
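For illustration, here is a hedged boto3 sketch that resolves the data endpoint (the SDK equivalent of the CLI command above), publishes a message, and reads a shadow. The thing name and topic are placeholders.

```python
# Hedged sketch: discover the IoT data endpoint, publish a message, read a shadow.
import json
import boto3

# Control-plane call to discover the data endpoint for this account/region.
iot = boto3.client("iot", region_name="us-east-1")
endpoint = iot.describe_endpoint(endpointType="iot:Data-ATS")["endpointAddress"]

# Data-plane client bound to that endpoint.
iot_data = boto3.client("iot-data", endpoint_url=f"https://{endpoint}", region_name="us-east-1")

# Publish a message over HTTP to a placeholder topic.
iot_data.publish(topic="devices/demo/telemetry", qos=1, payload=json.dumps({"temp": 21.5}))

# Read the persistent shadow of a placeholder thing.
shadow = iot_data.get_thing_shadow(thingName="demo-thing")
print(shadow["payload"].read())
```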