Mock sample for your project: AWS Application Discovery Service API

Integrate with "AWS Application Discovery Service API" from amazonaws.com in no time with Mockoon's ready to use mock sample

AWS Application Discovery Service

amazonaws.com

Version: 2015-11-01


Use this API in your project

Integrate third-party APIs faster by using the "AWS Application Discovery Service API" ready-to-use mock sample. Mocking this API helps you accelerate your development lifecycle and improve your integration tests' quality and reliability by accounting for random failures, slow response times, etc.
It also helps reduce your dependency on third-party APIs: no more accounts to create, API keys to provision, access to configure, unplanned downtime, etc.
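For example, once the mock is running you can point an AWS SDK client at it instead of the real AWS endpoint. Below is a minimal Python sketch, assuming the mock listens locally on port 3000; the port and the dummy credentials are placeholders, not part of the sample.

```python
# Minimal sketch: call the mocked API instead of the real AWS endpoint.
# Assumes a local mock of this sample is running on port 3000 (an assumption,
# adjust to wherever your mock actually listens).
import boto3

discovery = boto3.client(
    "discovery",                           # AWS Application Discovery Service
    endpoint_url="http://localhost:3000",  # point the SDK at the local mock
    region_name="us-east-1",
    aws_access_key_id="test",              # dummy credentials: the mock does not verify signatures
    aws_secret_access_key="test",
)

# Any Application Discovery Service action now hits the mock, e.g. DescribeAgents.
response = discovery.describe_agents()
print(response.get("agentsInfo", []))
```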

Description

AWS Application Discovery Service helps you plan application migration projects. It automatically identifies servers, virtual machines (VMs), and network dependencies in your on-premises data centers. For more information, see the AWS Application Discovery Service FAQ.

Application Discovery Service offers three ways of performing discovery and collecting data about your on-premises servers:

Agentless discovery is recommended for environments that use VMware vCenter Server. This mode doesn't require you to install an agent on each host. It does not work in non-VMware environments. Agentless discovery gathers server information regardless of the operating systems, which minimizes the time required for initial on-premises infrastructure assessment. Agentless discovery doesn't collect information about network dependencies; only agent-based discovery collects that information.

Agent-based discovery collects a richer set of data than agentless discovery by using the AWS Application Discovery Agent, which you install on one or more hosts in your data center. The agent captures infrastructure and application information, including an inventory of running processes, system performance information, resource utilization, and network dependencies. The information collected by agents is secured at rest and in transit to the Application Discovery Service database in the cloud.

AWS Partner Network (APN) solutions integrate with Application Discovery Service, enabling you to import details of your on-premises environment directly into Migration Hub without using the discovery connector or discovery agent. Third-party application discovery tools can query AWS Application Discovery Service, and they can write to the Application Discovery Service database using the public API. In this way, you can import data into Migration Hub and view it, so that you can associate applications with servers and track migrations.

Recommendations: We recommend that you use agent-based discovery for non-VMware environments, and whenever you want to collect information about network dependencies. You can run agent-based and agentless discovery simultaneously. Use agentless discovery to complete the initial infrastructure assessment quickly, and then install agents on select hosts to collect additional information.

Working With This Guide: This API reference provides descriptions, syntax, and usage examples for each of the actions and data types for Application Discovery Service. The topic for each action shows the API request parameters and the response. Alternatively, you can use one of the AWS SDKs to access an API that is tailored to the programming language or platform that you're using. For more information, see AWS SDKs.

Remember that you must set your Migration Hub home region before you call any of these APIs. You must make API calls for write actions (create, notify, associate, disassociate, import, or put) while in your home region, or a HomeRegionNotSetException error is returned. API calls for read actions (list, describe, stop, and delete) are permitted outside of your home region. Although it is unlikely, the Migration Hub home region could change. If you call APIs outside the home region, an InvalidInputException is returned. You must call GetHomeRegion to obtain the latest Migration Hub home region.

This guide is intended for use with the AWS Application Discovery Service User Guide. All data is handled according to the AWS Privacy Policy.
You can operate Application Discovery Service offline to inspect collected data before it is shared with the service.
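To illustrate the home-region requirement described above, here is a minimal boto3 sketch that looks up the Migration Hub home region first and handles the error returned when no home region is set. The bootstrap region and the application name are illustrative placeholders.

```python
# Minimal sketch of the home-region requirement: look up the Migration Hub
# home region, then issue write actions from that region.
import boto3

# The region used to bootstrap this call is a placeholder.
home_region = boto3.client("migrationhub-config", region_name="us-west-2") \
    .get_home_region()["HomeRegion"]

discovery = boto3.client("discovery", region_name=home_region)

try:
    # A write action (create) must run in the home region.
    app = discovery.create_application(name="example-app")  # placeholder name
    print(app["configurationId"])
except discovery.exceptions.HomeRegionNotSetException:
    print("Set a Migration Hub home region before calling write actions.")
```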

Other APIs by amazonaws.com

Amazon Pinpoint SMS and Voice Service

Pinpoint SMS and Voice Messaging public facing APIs

Amazon Simple Email Service

Amazon Simple Email Service This document contains reference information for the Amazon Simple Email Service (Amazon SES) API, version 2010-12-01. This document is best used in conjunction with the Amazon SES Developer Guide. For a list of Amazon SES endpoints to use in service requests, see Regions and Amazon SES in the Amazon SES Developer Guide.

Amazon CloudHSM

AWS CloudHSM Service This is documentation for AWS CloudHSM Classic. For more information, see AWS CloudHSM Classic FAQs, the AWS CloudHSM Classic User Guide, and the AWS CloudHSM Classic API Reference. For information about the current version of AWS CloudHSM, see AWS CloudHSM, the AWS CloudHSM User Guide, and the AWS CloudHSM API Reference.

AWS CloudTrail

CloudTrail This is the CloudTrail API Reference. It provides descriptions of actions, data types, common parameters, and common errors for CloudTrail. CloudTrail is a web service that records Amazon Web Services API calls for your Amazon Web Services account and delivers log files to an Amazon S3 bucket. The recorded information includes the identity of the user, the start time of the Amazon Web Services API call, the source IP address, the request parameters, and the response elements returned by the service. As an alternative to the API, you can use one of the Amazon Web Services SDKs, which consist of libraries and sample code for various programming languages and platforms (Java, Ruby, .NET, iOS, Android, etc.). The SDKs provide programmatic access to CloudTrail. For example, the SDKs handle cryptographically signing requests, managing errors, and retrying requests automatically. For more information about the Amazon Web Services SDKs, including how to download and install them, see Tools to Build on Amazon Web Services. See the CloudTrail User Guide for information about the data that is included with each Amazon Web Services API call listed in the log files.
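As a small illustration of the SDK path described above (where signing and retries are handled for you), here is a minimal boto3 sketch that looks up recent recorded events; the lookup attribute (ConsoleLogin) is just an illustrative choice.

```python
# Minimal sketch: read recent activity recorded by CloudTrail via an AWS SDK.
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
    MaxResults=5,
)
for event in events["Events"]:
    print(event["EventTime"], event["EventName"], event.get("Username"))
```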

Amazon CloudDirectory

Amazon Cloud Directory Amazon Cloud Directory is a component of the AWS Directory Service that simplifies the development and management of cloud-scale web, mobile, and IoT applications. This guide describes the Cloud Directory operations that you can call programmatically and includes detailed information on data types and errors. For information about Cloud Directory features, see AWS Directory Service and the Amazon Cloud Directory Developer Guide.

Application Auto Scaling

With Application Auto Scaling, you can configure automatic scaling for the following resources:
Amazon AppStream 2.0 fleets
Amazon Aurora Replicas
Amazon Comprehend document classification and entity recognizer endpoints
Amazon DynamoDB tables and global secondary indexes throughput capacity
Amazon ECS services
Amazon ElastiCache for Redis clusters (replication groups)
Amazon EMR clusters
Amazon Keyspaces (for Apache Cassandra) tables
Lambda function provisioned concurrency
Amazon Managed Streaming for Apache Kafka broker storage
Amazon SageMaker endpoint variants
Spot Fleet (Amazon EC2) requests
Custom resources provided by your own applications or services

API Summary: The Application Auto Scaling service API includes three key sets of actions:
Register and manage scalable targets - Register Amazon Web Services or custom resources as scalable targets (a resource that Application Auto Scaling can scale), set minimum and maximum capacity limits, and retrieve information on existing scalable targets.
Configure and manage automatic scaling - Define scaling policies to dynamically scale your resources in response to CloudWatch alarms, schedule one-time or recurring scaling actions, and retrieve your recent scaling activity history.
Suspend and resume scaling - Temporarily suspend and later resume automatic scaling by calling the RegisterScalableTarget API action for any Application Auto Scaling scalable target. You can suspend and resume (individually or in combination) scale-out activities that are triggered by a scaling policy, scale-in activities that are triggered by a scaling policy, and scheduled scaling.

To learn more about Application Auto Scaling, including information about granting IAM users required permissions for Application Auto Scaling actions, see the Application Auto Scaling User Guide.
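For instance, registering a scalable target and temporarily suspending its scaling activities might look like the minimal boto3 sketch below; the DynamoDB table name and capacity limits are placeholders.

```python
# Minimal sketch: register a DynamoDB table as a scalable target and suspend
# dynamic scaling on it (table name and limits are placeholders).
import boto3

autoscaling = boto3.client("application-autoscaling", region_name="us-east-1")

autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/example-table",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=100,
    # Temporarily suspend scale-in and scale-out triggered by scaling policies.
    SuspendedState={
        "DynamicScalingInSuspended": True,
        "DynamicScalingOutSuspended": True,
        "ScheduledScalingSuspended": False,
    },
)
```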

AWS Resource Access Manager

This is the Resource Access Manager API Reference. This documentation provides descriptions and syntax for each of the actions and data types in RAM. RAM is a service that helps you securely share your Amazon Web Services resources across Amazon Web Services accounts and within your organization or organizational units (OUs) in Organizations. For supported resource types, you can also share resources with IAM roles and IAM users. If you have multiple Amazon Web Services accounts, you can use RAM to share those resources with other accounts. To learn more about RAM, see the following resources: the Resource Access Manager product page and the Resource Access Manager User Guide.

AWS License Manager

AWS License Manager makes it easier to manage licenses from software vendors across multiple AWS accounts and on-premises servers.

AWS Elemental MediaLive

API for AWS Elemental MediaLive

Amazon Chime

The Amazon Chime API (application programming interface) is designed for developers to perform key tasks, such as creating and managing Amazon Chime accounts, users, and Voice Connectors. This guide provides detailed information about the Amazon Chime API, including operations, types, inputs and outputs, and error codes. It also includes some server-side API actions to use with the Amazon Chime SDK. For more information about the Amazon Chime SDK, see Using the Amazon Chime SDK in the Amazon Chime Developer Guide.

You can use an AWS SDK, the AWS Command Line Interface (AWS CLI), or the REST API to make API calls. We recommend using an AWS SDK or the AWS CLI. Each API operation includes links to information about using it with a language-specific AWS SDK or the AWS CLI.

Using an AWS SDK: You don't need to write code to calculate a signature for request authentication. The SDK clients authenticate your requests by using access keys that you provide. For more information about AWS SDKs, see the AWS Developer Center.

Using the AWS CLI: Use your access keys with the AWS CLI to make API calls. For information about setting up the AWS CLI, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide. For a list of available Amazon Chime commands, see the Amazon Chime commands in the AWS CLI Command Reference.

Using REST APIs: If you use REST to make API calls, you must authenticate your request by providing a signature. Amazon Chime supports signature version 4. For more information, see Signature Version 4 Signing Process in the Amazon Web Services General Reference. When making REST API calls, use the service name chime and REST endpoint https://service.chime.aws.amazon.com.

Administrative permissions are controlled using AWS Identity and Access Management (IAM). For more information, see Identity and Access Management for Amazon Chime in the Amazon Chime Administration Guide.
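As a small illustration of the recommended SDK path, where Signature Version 4 signing is handled for you, here is a minimal boto3 sketch; the result limit is an illustrative choice.

```python
# Minimal sketch: list Amazon Chime accounts through an AWS SDK, which signs
# requests on your behalf.
import boto3

chime = boto3.client("chime", region_name="us-east-1")

accounts = chime.list_accounts(MaxResults=10)
for account in accounts["Accounts"]:
    print(account["AccountId"], account["Name"])
```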

Amazon Comprehend

Amazon Comprehend is an AWS service for gaining insight into the content of documents. Use these actions to determine the topics contained in your documents, the topics they discuss, the predominant sentiment expressed in them, the predominant language used, and more.
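For example, detecting the predominant language and sentiment of a short text might look like the minimal boto3 sketch below; the sample text is illustrative.

```python
# Minimal sketch: detect the dominant language of a text, then its sentiment.
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

text = "The migration went smoothly and the new dashboard is great."  # sample input

language = comprehend.detect_dominant_language(Text=text)["Languages"][0]["LanguageCode"]
sentiment = comprehend.detect_sentiment(Text=text, LanguageCode=language)["Sentiment"]

print(language, sentiment)
```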

Amazon AppIntegrations Service

The Amazon AppIntegrations service enables you to configure and reuse connections to external applications. For information about how you can use external applications with Amazon Connect, see Set up pre-built integrations in the Amazon Connect Administrator Guide.

Other APIs in the same category

FabricAdminClient

azure.com
Operation status operation endpoints and objects.

CdnManagementClient

azure.com
Use these APIs to manage Azure CDN resources through the Azure Resource Manager. You must make sure that requests made to these resources are secure.

SqlManagementClient

azure.com
The Azure SQL Database management API provides a RESTful set of web APIs that interact with Azure SQL Database services to manage your databases. The API enables users to create, retrieve, update, and delete databases, servers, and other entities.

Azure Media Services

azure.com
This Swagger was generated by the API Framework.

GuestConfiguration

azure.com

ContainerRegistryManagementClient

azure.com

Linode API

Introduction
The Linode API provides the ability to programmatically manage the full
range of Linode products and services.
This reference is designed to assist application developers and system
administrators. Each endpoint includes descriptions, request syntax, and
examples using standard HTTP requests. Response data is returned in JSON
format.
This document was generated from our OpenAPI Specification. See the
OpenAPI website for more information.
Download the Linode OpenAPI Specification.
Changelog
View our Changelog to see release
notes on all changes made to our API.
Access and Authentication
Some endpoints are publicly accessible without requiring authentication.
All endpoints affecting your Account, however, require either a Personal
Access Token or OAuth authentication (when using third-party
applications).
Personal Access Token
The easiest way to access the API is with a Personal Access Token (PAT)
generated from the
Linode Cloud Manager or
the Create Personal Access Token endpoint.
All scopes for the OAuth security model (defined below) apply to this
security model as well.
Authentication
| Security Scheme Type: | HTTP |
|-----------------------|------|
| HTTP Authorization Scheme | bearer |
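For example, a request authenticated with a Personal Access Token using the bearer scheme above might look like the minimal Python sketch below; the token value is a placeholder, and the /v4/linode/instances endpoint is one illustrative choice.

```python
# Minimal sketch: call the Linode API with a Personal Access Token as a bearer token.
import requests

API_URL = "https://api.linode.com/v4"
TOKEN = "your-personal-access-token"  # placeholder: generate one in the Cloud Manager

response = requests.get(
    f"{API_URL}/linode/instances",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
response.raise_for_status()
for linode in response.json()["data"]:
    print(linode["id"], linode["label"])
```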
OAuth
If you only need to access the Linode API for personal use,
we recommend that you create a personal access token.
If you're designing an application that can authenticate with an arbitrary Linode user, then
you should use the OAuth 2.0 workflows presented in this section.
For a more detailed example of an OAuth 2.0 implementation, see our guide on How to Create an OAuth App with the Linode Python API Library.
Before you implement OAuth in your application, you first need to create an OAuth client. You can do this with the Linode API or via the Cloud Manager:
When creating the client, you'll supply a label and a redirect_uri (referred to as the Callback URL in the Cloud Manager).
The response from this endpoint will give you a client_id and a secret.
Clients can be public or private, and are private by default. You can choose to make the client public when it is created.
A private client is used with applications which can securely store the client secret (that is, the secret returned to you when you first created the client). For example, an application running on a secured server that only the developer has access to would use a private OAuth client. This is also called a confidential client in some OAuth documentation.
A public client is used with applications where the client secret is not guaranteed to be secure. For example, a native app running on a user's computer may not be able to keep the client secret safe, as a user could potentially inspect the source of the application. So, native apps or apps that run in a user's browser should use a public client.
Public and private clients follow different workflows, as described below.
OAuth Workflow
The OAuth workflow is a series of exchanges between your third-party app and Linode. The workflow is used
to authenticate a user before an application can start making API calls on the user's behalf.
Notes:
With respect to the diagram in section 1.2 of RFC 6749, login.linode.com (referred to in this section as the login server)
is the Resource Owner and the Authorization Server; api.linode.com (referred to here as the api server) is the Resource Server.
The OAuth spec refers to the private and public workflows listed below as the authorization code flow and implicit flow.
| PRIVATE WORKFLOW | PUBLIC WORKFLOW |
|------------------|------------------|
| 1. The user visits the application's website and is directed to login with Linode. | 1. The user visits the application's website and is directed to login with Linode. |
| 2. Your application then redirects the user to Linode's login server with the client application's client_id and requested OAuth scope, which should appear in the URL of the login page. | 2. Your application then redirects the user to Linode's login server with the client application's client_id and requested OAuth scope, which should appear in the URL of the login page. |
| 3. The user logs into the login server with their username and password. | 3. The user logs into the login server with their username and password. |
| 4. The login server redirects the user to the specified redirect URL with a temporary authorization code (exchange code) in the URL. | 4. The login server redirects the user back to your application with an OAuth access_token embedded in the redirect URL's hash. This is temporary and expires in two hours. No refresh_token is issued. Therefore, once the access_token expires, a new one will need to be issued by having the user log in again. |
| 5. The application issues a POST request (see below) to the login server with the exchange code, client_id, and the client application's client_secret. | |
| 6. The login server responds to the client application with a new OAuth access_token and refresh_token. The access_token is set to expire in two hours. | |
| 7. The refresh_token can be used by contacting the login server with the client_id, client_secret, grant_type, and refresh_token to get a new OAuth access_token and refresh_token. The new access_token is good for another two hours, and the new refresh_token can be used to extend the session again by this same method. | |
OAuth Private Workflow - Additional Details
The following information expands on steps 5 through 7 of the private workflow:
Once the user has logged into Linode and you have received an exchange code,
you will need to trade that exchange code for an access_token and refresh_token. You
do this by making an HTTP POST request to the following address:
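A minimal Python sketch of that token exchange is shown below. Since the exact address is not reproduced here, the token endpoint URL is left as a placeholder, and the exchange code and client credentials are dummies.

```python
# Minimal sketch: trade the temporary exchange code for an access_token and
# refresh_token (private workflow, step 5).
import requests

# Placeholder for the login server's token endpoint; see Linode's OAuth docs
# for the actual address.
TOKEN_URL = "https://login.linode.com/..."

response = requests.post(
    TOKEN_URL,
    data={
        "grant_type": "authorization_code",
        "code": "the-exchange-code-from-the-redirect",  # dummy value
        "client_id": "your-client-id",                  # dummy value
        "client_secret": "your-client-secret",          # dummy value
    },
)
response.raise_for_status()
tokens = response.json()
print(tokens["access_token"], tokens["refresh_token"])
```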
Rate Limiting
With the Linode API, you can make up to 1,600 general API requests every two minutes per user as
determined by IP address or by OAuth token. Additionally, there are endpoint-specific limits defined below.
Note: There may be rate limiting applied at other levels outside of the API, for example, at the load balancer.
/stats endpoints have their own dedicated limits of 100 requests per minute per user.
These endpoints are:
View Linode Statistics
View Linode Statistics (year/month)
View NodeBalancer Statistics
List Managed Stats
Object Storage endpoints have a dedicated limit of 750 requests per second per user.
The Object Storage endpoints are:
Object Storage Endpoints
Opening Support Tickets has a dedicated limit of 2 requests per minute per user.
That endpoint is:
Open Support Ticket
Accepting Service Transfers has a dedicated limit of 2 requests per minute per user.
That endpoint is:
Service Transfer Accept
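A minimal client-side sketch for backing off when one of these limits is hit follows; it assumes rate-limited requests come back as HTTP 429, and the Retry-After header and fallback delay are assumptions rather than details taken from this description.

```python
# Minimal sketch: retry a GET request with backoff when the API rate limit is hit.
import time
import requests

def get_with_backoff(url, headers, max_attempts=5):
    """GET a URL, sleeping and retrying whenever a 429 response is returned."""
    for attempt in range(max_attempts):
        response = requests.get(url, headers=headers)
        if response.status_code != 429:
            response.raise_for_status()
            return response
        # Respect Retry-After if present (an assumption), otherwise back off exponentially.
        delay = float(response.headers.get("Retry-After", 2 ** attempt))
        time.sleep(delay)
    raise RuntimeError("Rate limit still exceeded after retries")
```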
CLI (Command Line Interface)
The Linode CLI allows you to easily
work with the API using intuitive and simple syntax. It requires a
Personal Access Token
for authentication, and gives you access to all of the features and functionality
of the Linode API that are documented here with CLI examples.
Endpoints that do not have CLI examples are currently unavailable through the CLI, but
can be accessed via other methods such as Shell commands and other third-party applications.

DiskResourceProviderClient

azure.com
The Disk Resource Provider Client.

Azure Log Analytics

azure.com
Azure Log Analytics API reference

Service Fabric Client APIs

azure.com
Service Fabric REST Client APIs allows management of Service Fabric clusters, applications and services.

MariaDBManagementClient

azure.com
The Microsoft Azure management API provides create, read, update, and delete functionality for Azure MariaDB resources, including servers, databases, firewall rules, VNET rules, security alert policies, log files, and configurations, with a new business model.

Azure Reservation

azure.com
Microsoft Azure Quota Resource Provider.