Mock sample for your project: Amazon WorkSpaces API

Integrate with "Amazon WorkSpaces API" from amazonaws.com in no time with Mockoon's ready-to-use mock sample

Amazon WorkSpaces

amazonaws.com

Version: 2015-04-08


Use this API in your project

Start working with "Amazon WorkSpaces API" right away by using this ready-to-use mock sample. API mocking can greatly speed up your application development by removing tedious setup tasks and common blockers: API key provisioning, account creation, unplanned downtime, etc.
It also reduces your dependency on third-party APIs and improves the quality and reliability of your integration tests by letting you simulate random failures, slow response times, etc.
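
As a minimal sketch of what that looks like in practice, the snippet below exercises a locally running copy of this mock from a test. It assumes the sample was started with the Mockoon CLI on port 3000; the route, target header, and response shape depend on how the sample is configured, so treat the names below as illustrative.

```typescript
// Hypothetical smoke test against a locally running copy of this mock, e.g.
// started with the Mockoon CLI: mockoon-cli start --data ./mock.json --port 3000
// WorkSpaces is a JSON-RPC style API, so the action is usually selected via the
// X-Amz-Target header; the exact route and target prefix depend on how the
// sample is configured, so treat the names below as illustrative.
const MOCK_BASE_URL = "http://localhost:3000";

async function listMockedWorkspaces(): Promise<void> {
  const response = await fetch(MOCK_BASE_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/x-amz-json-1.1",
      "X-Amz-Target": "WorkspacesService.DescribeWorkspaces", // assumed target
    },
    body: JSON.stringify({}),
  });
  console.log("Mocked workspaces:", await response.json());
}

listMockedWorkspaces().catch(console.error);
```

Because the mock answers locally, the same test keeps passing when the real service is down, rate limited, or not yet provisioned.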

Description

Amazon WorkSpaces enables you to provision virtual, cloud-based Microsoft Windows and Amazon Linux desktops for your users.

Other APIs by amazonaws.com

AWS RoboMaker

This section provides documentation for the AWS RoboMaker API operations.

Amazon SimpleDB

Amazon SimpleDB is a web service providing the core database functions of data indexing and querying in the cloud. By offloading the time and effort associated with building and operating a web-scale database, SimpleDB provides developers the freedom to focus on application development. A traditional, clustered relational database requires a sizable upfront capital outlay, is complex to design, and often requires extensive and repetitive database administration. Amazon SimpleDB is dramatically simpler, requiring no schema, automatically indexing your data and providing a simple API for storage and access. This approach eliminates the administrative burden of data modeling, index maintenance, and performance tuning. Developers gain access to this functionality within Amazon's proven computing environment, are able to scale instantly, and pay only for what they use. Visit http://aws.amazon.com/simpledb/ for more information.

Amazon Glacier

Amazon S3 Glacier (Glacier) is a storage solution for "cold data." Glacier is an extremely low-cost storage service that provides secure, durable, and easy-to-use storage for data backup and archival. With Glacier, customers can store their data cost effectively for months, years, or decades. Glacier also enables customers to offload the administrative burdens of operating and scaling storage to AWS, so they don't have to worry about capacity planning, hardware provisioning, data replication, hardware failure and recovery, or time-consuming hardware migrations. Glacier is a great storage choice when low storage cost is paramount and your data is rarely retrieved. If your application requires fast or frequent access to your data, consider using Amazon S3. For more information, see Amazon Simple Storage Service (Amazon S3). You can store any kind of data in any format. There is no maximum limit on the total amount of data you can store in Glacier. If you are a first-time user of Glacier, we recommend that you begin by reading the following sections in the Amazon S3 Glacier Developer Guide: What is Amazon S3 Glacier, which describes the underlying data model, the operations it supports, and the AWS SDKs that you can use to interact with the service; and Getting Started with Amazon S3 Glacier, which walks you through the process of creating a vault, uploading archives, creating jobs to download archives, retrieving the job output, and deleting archives.

Amazon Import/Export Snowball

AWS Snow Family is a petabyte-scale data transport solution that uses secure devices to transfer large amounts of data between your on-premises data centers and Amazon Simple Storage Service (Amazon S3). The Snow commands described here provide access to the same functionality that is available in the AWS Snow Family Management Console, which enables you to create and manage jobs for a Snow device. To transfer data locally with a Snow device, you'll need to use the Snowball Edge client, the Amazon S3 API Interface for Snowball, or AWS OpsHub for Snow Family. For more information, see the User Guide.

AWS Signer

AWS Signer is a fully managed code signing service to help you ensure the trust and integrity of your code. AWS Signer supports the following applications: With code signing for AWS Lambda, you can sign AWS Lambda deployment packages. Integrated support is provided for Amazon S3, Amazon CloudWatch, and AWS CloudTrail. In order to sign code, you create a signing profile and then use Signer to sign Lambda zip files in S3. With code signing for IoT, you can sign code for any IoT device that is supported by AWS. IoT code signing is available for Amazon FreeRTOS and AWS IoT Device Management, and is integrated with AWS Certificate Manager (ACM). In order to sign code, you import a third-party code signing certificate using ACM, and use that to sign updates in Amazon FreeRTOS and AWS IoT Device Management. For more information about AWS Signer, see the AWS Signer Developer Guide.

AWS Cost and Usage Report Service

The AWS Cost and Usage Report API enables you to programmatically create, query, and delete AWS Cost and Usage report definitions. AWS Cost and Usage reports track the monthly AWS costs and usage associated with your AWS account. The report contains line items for each unique combination of AWS product, usage type, and operation that your AWS account uses. You can configure the AWS Cost and Usage report to show only the data that you want, using the AWS Cost and Usage API. Service endpoint: the AWS Cost and Usage Report API provides a single endpoint, cur.us-east-1.amazonaws.com.
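
For illustration, a minimal sketch of calling this API with the AWS SDK for JavaScript v3 might look like the following; the client package and command come from the SDK, while the region and credentials are assumed to be configured in your environment.

```typescript
// Sketch: listing Cost and Usage report definitions with the AWS SDK for
// JavaScript v3. Assumes @aws-sdk/client-cost-and-usage-report-service is
// installed and credentials are available in the environment; the service is
// served from us-east-1, matching the single endpoint above.
import {
  CostAndUsageReportServiceClient,
  DescribeReportDefinitionsCommand,
} from "@aws-sdk/client-cost-and-usage-report-service";

const client = new CostAndUsageReportServiceClient({ region: "us-east-1" });

async function listReportDefinitions(): Promise<void> {
  const { ReportDefinitions = [] } = await client.send(
    new DescribeReportDefinitionsCommand({})
  );
  for (const report of ReportDefinitions) {
    console.log(report.ReportName, report.TimeUnit, report.S3Bucket);
  }
}

listReportDefinitions().catch(console.error);
```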

Amazon Timestream Query

AWS CodePipeline

AWS CodePipeline Overview
This is the AWS CodePipeline API Reference. This guide provides descriptions of the actions and data types for AWS CodePipeline. Some functionality for your pipeline can only be configured through the API. For more information, see the AWS CodePipeline User Guide.

You can use the AWS CodePipeline API to work with pipelines, stages, actions, and transitions. Pipelines are models of automated release processes. Each pipeline is uniquely named and consists of stages, actions, and transitions. You can work with pipelines by calling:

- CreatePipeline, which creates a uniquely named pipeline.
- DeletePipeline, which deletes the specified pipeline.
- GetPipeline, which returns information about the pipeline structure and pipeline metadata, including the pipeline Amazon Resource Name (ARN).
- GetPipelineExecution, which returns information about a specific execution of a pipeline.
- GetPipelineState, which returns information about the current state of the stages and actions of a pipeline.
- ListActionExecutions, which returns action-level details for past executions, including individual action duration, status, any errors that occurred during the execution, and input and output artifact location details.
- ListPipelines, which gets a summary of all of the pipelines associated with your account.
- ListPipelineExecutions, which gets a summary of the most recent executions for a pipeline.
- StartPipelineExecution, which runs the most recent revision of an artifact through the pipeline.
- StopPipelineExecution, which stops the specified pipeline execution from continuing through the pipeline.
- UpdatePipeline, which updates a pipeline with edits or changes to the structure of the pipeline.

Pipelines include stages. Each stage contains one or more actions that must complete before the next stage begins. A stage results in success or failure. If a stage fails, the pipeline stops at that stage and remains stopped until either a new version of an artifact appears in the source location, or a user takes action to rerun the most recent artifact through the pipeline. You can call GetPipelineState, which displays the status of a pipeline, including the status of stages in the pipeline, or GetPipeline, which returns the entire structure of the pipeline, including the stages of that pipeline. For more information about the structure of stages and actions, see the AWS CodePipeline Pipeline Structure Reference.

Pipeline stages include actions, such as the source or build actions performed in a stage of a pipeline. For example, you can use a source action to import artifacts into a pipeline from a source such as Amazon S3. Like stages, you do not work with actions directly in most cases, but you do define and interact with actions when working with pipeline operations such as CreatePipeline and GetPipelineState. Valid action categories are: Source, Build, Test, Deploy, Approval, and Invoke.

Pipelines also include transitions, which allow the transition of artifacts from one stage to the next in a pipeline after the actions in one stage complete. You can work with transitions by calling:

- DisableStageTransition, which prevents artifacts from transitioning to the next stage in a pipeline.
- EnableStageTransition, which enables transition of artifacts between stages in a pipeline.

Using the API to integrate with AWS CodePipeline
For third-party integrators or developers who want to create their own integrations with AWS CodePipeline, the expected sequence varies from the standard API user. To integrate with AWS CodePipeline, developers need to work with the following items:

Jobs, which are instances of an action. For example, a job for a source action might import a revision of an artifact from a source. You can work with jobs by calling:

- AcknowledgeJob, which confirms whether a job worker has received the specified job.
- GetJobDetails, which returns the details of a job.
- PollForJobs, which determines whether there are any jobs to act on.
- PutJobFailureResult, which provides details of a job failure.
- PutJobSuccessResult, which provides details of a job success.

Third party jobs, which are instances of an action created by a partner action and integrated into AWS CodePipeline. Partner actions are created by members of the AWS Partner Network. You can work with third party jobs by calling:

- AcknowledgeThirdPartyJob, which confirms whether a job worker has received the specified job.
- GetThirdPartyJobDetails, which requests the details of a job for a partner action.
- PollForThirdPartyJobs, which determines whether there are any jobs to act on.
- PutThirdPartyJobFailureResult, which provides details of a job failure.
- PutThirdPartyJobSuccessResult, which provides details of a job success.
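
As a sketch of the standard (non-integrator) sequence described above, the following uses the AWS SDK for JavaScript v3 to list pipelines and report per-stage status; the region and ambient credentials are assumptions.

```typescript
// Sketch: list pipelines, then inspect stage-by-stage status with
// GetPipelineState. Assumes @aws-sdk/client-codepipeline (AWS SDK for
// JavaScript v3) is installed and credentials/region are configured.
import {
  CodePipelineClient,
  ListPipelinesCommand,
  GetPipelineStateCommand,
} from "@aws-sdk/client-codepipeline";

const client = new CodePipelineClient({ region: "us-east-1" });

async function showPipelineStates(): Promise<void> {
  const { pipelines = [] } = await client.send(new ListPipelinesCommand({}));
  for (const pipeline of pipelines) {
    const state = await client.send(
      new GetPipelineStateCommand({ name: pipeline.name! })
    );
    for (const stage of state.stageStates ?? []) {
      // Each stage reports the status of its most recent execution.
      console.log(pipeline.name, stage.stageName, stage.latestExecution?.status);
    }
  }
}

showPipelineStates().catch(console.error);
```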

AWS Support

The AWS Support API Reference is intended for programmers who need detailed information about the AWS Support operations and data types. You can use the API to manage your support cases programmatically. The AWS Support API uses HTTP methods that return results in JSON format. You must have a Business or Enterprise Support plan to use the AWS Support API. If you call the AWS Support API from an account that does not have a Business or Enterprise Support plan, the SubscriptionRequiredException error message appears. For information about changing your support plan, see AWS Support.

The AWS Support service also exposes a set of AWS Trusted Advisor features. You can retrieve a list of checks and their descriptions, get check results, specify checks to refresh, and get the refresh status of checks.

The following list describes the AWS Support case management operations:

- Service names, issue categories, and available severity levels: the DescribeServices and DescribeSeverityLevels operations return AWS service names, service codes, service categories, and problem severity levels. You use these values when you call the CreateCase operation.
- Case creation, case details, and case resolution: the CreateCase, DescribeCases, DescribeAttachment, and ResolveCase operations create AWS Support cases, retrieve information about cases, and resolve cases.
- Case communication: the DescribeCommunications, AddCommunicationToCase, and AddAttachmentsToSet operations retrieve and add communications and attachments to AWS Support cases.

The following list describes the operations available from the AWS Support service for Trusted Advisor:

- DescribeTrustedAdvisorChecks returns the list of checks that run against your AWS resources.
- Using the checkId for a specific check returned by DescribeTrustedAdvisorChecks, you can call DescribeTrustedAdvisorCheckResult to obtain the results for the check that you specified.
- DescribeTrustedAdvisorCheckSummaries returns summarized results for one or more Trusted Advisor checks.
- RefreshTrustedAdvisorCheck requests that Trusted Advisor rerun a specified check.
- DescribeTrustedAdvisorCheckRefreshStatuses reports the refresh status of one or more checks.

For authentication of requests, AWS Support uses the Signature Version 4 Signing Process. See About the AWS Support API in the AWS Support User Guide for information about how to use this service to create and manage your support cases, and how to call Trusted Advisor for results of checks on your resources.
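
A minimal sketch of the Trusted Advisor flow described above, assuming the AWS SDK for JavaScript v3 and an account with a Business or Enterprise Support plan:

```typescript
// Sketch: list Trusted Advisor checks, then fetch the result of each one.
// Assumes @aws-sdk/client-support and ambient credentials; without a Business
// or Enterprise Support plan this throws SubscriptionRequiredException.
import {
  SupportClient,
  DescribeTrustedAdvisorChecksCommand,
  DescribeTrustedAdvisorCheckResultCommand,
} from "@aws-sdk/client-support";

// The AWS Support API is served from us-east-1.
const client = new SupportClient({ region: "us-east-1" });

async function summarizeChecks(): Promise<void> {
  const { checks = [] } = await client.send(
    new DescribeTrustedAdvisorChecksCommand({ language: "en" })
  );
  for (const check of checks.slice(0, 5)) {
    const { result } = await client.send(
      new DescribeTrustedAdvisorCheckResultCommand({ checkId: check.id! })
    );
    console.log(check.name, "->", result?.status);
  }
}

summarizeChecks().catch(console.error);
```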

Amazon Route 53

Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service.

Amazon Route 53 Domains

Amazon Route 53 API actions let you register domain names and perform related operations.

Amazon Route 53 Resolver

When you create a VPC using Amazon VPC, you automatically get DNS resolution within the VPC from Route 53 Resolver. By default, Resolver answers DNS queries for VPC domain names such as domain names for EC2 instances or Elastic Load Balancing load balancers. Resolver performs recursive lookups against public name servers for all other domain names.

You can also configure DNS resolution between your VPC and your network over a Direct Connect or VPN connection:

Forward DNS queries from resolvers on your network to Route 53 Resolver: DNS resolvers on your network can forward DNS queries to Resolver in a specified VPC. This allows your DNS resolvers to easily resolve domain names for Amazon Web Services resources such as EC2 instances or records in a Route 53 private hosted zone. For more information, see How DNS Resolvers on Your Network Forward DNS Queries to Route 53 Resolver in the Amazon Route 53 Developer Guide.

Conditionally forward queries from a VPC to resolvers on your network: You can configure Resolver to forward queries that it receives from EC2 instances in your VPCs to DNS resolvers on your network. To forward selected queries, you create Resolver rules that specify the domain names for the DNS queries that you want to forward (such as example.com), and the IP addresses of the DNS resolvers on your network that you want to forward the queries to, as shown in the sketch below. If a query matches multiple rules (example.com, acme.example.com), Resolver chooses the rule with the most specific match (acme.example.com) and forwards the query to the IP addresses that you specified in that rule. For more information, see How Route 53 Resolver Forwards DNS Queries from Your VPCs to Your Network in the Amazon Route 53 Developer Guide.

Like Amazon VPC, Resolver is Regional. In each Region where you have VPCs, you can choose whether to forward queries from your VPCs to your network (outbound queries), from your network to your VPCs (inbound queries), or both.
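
As a sketch of the conditional-forwarding case, the following creates a FORWARD rule with the AWS SDK for JavaScript v3; the outbound endpoint ID and target IP are placeholders you would replace with your own.

```typescript
// Sketch: create a FORWARD rule that sends queries for example.com to an
// on-premises resolver. Assumes @aws-sdk/client-route53resolver, ambient AWS
// credentials, and an existing outbound Resolver endpoint. The endpoint ID
// and target IP below are placeholders.
import {
  Route53ResolverClient,
  CreateResolverRuleCommand,
} from "@aws-sdk/client-route53resolver";

const client = new Route53ResolverClient({ region: "us-east-1" });

async function createForwardRule(): Promise<void> {
  const response = await client.send(
    new CreateResolverRuleCommand({
      CreatorRequestId: `fwd-rule-${Date.now()}`, // idempotency token
      Name: "forward-example-com",
      RuleType: "FORWARD",
      DomainName: "example.com",
      ResolverEndpointId: "rslvr-out-EXAMPLE", // placeholder outbound endpoint
      TargetIps: [{ Ip: "10.0.0.53", Port: 53 }], // placeholder on-prem resolver
    })
  );
  console.log("Created rule:", response.ResolverRule?.Id);
}

createForwardRule().catch(console.error);
```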

Other APIs in the same category

AWS IoT Wireless

AWS IoT Wireless API documentation

Linode API

Introduction
The Linode API provides the ability to programmatically manage the full
range of Linode products and services.
This reference is designed to assist application developers and system
administrators. Each endpoint includes descriptions, request syntax, and
examples using standard HTTP requests. Response data is returned in JSON
format.
This document was generated from our OpenAPI Specification. See the
OpenAPI website for more information.
Download the Linode OpenAPI Specification.
Changelog
View our Changelog to see release
notes on all changes made to our API.
Access and Authentication
Some endpoints are publicly accessible without requiring authentication.
All endpoints affecting your Account, however, require either a Personal
Access Token or OAuth authentication (when using third-party
applications).
Personal Access Token
The easiest way to access the API is with a Personal Access Token (PAT)
generated from the
Linode Cloud Manager or
the Create Personal Access Token endpoint.
All scopes for the OAuth security model (defined below) apply to this
security model as well.
Authentication
| Security Scheme Type | HTTP |
|---------------------------|--------|
| HTTP Authorization Scheme | bearer |
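
A minimal sketch of that scheme, assuming a PAT exported as LINODE_TOKEN and using the public /v4/account endpoint:

```typescript
// Minimal sketch: calling an authenticated Linode endpoint with a Personal
// Access Token via the bearer scheme above. The /account path is
// illustrative; any endpoint covered by the token's scopes works the same way.
const LINODE_API = "https://api.linode.com/v4";

async function getAccount(): Promise<void> {
  const token = process.env.LINODE_TOKEN; // your Personal Access Token
  const response = await fetch(`${LINODE_API}/account`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!response.ok) {
    throw new Error(`Linode API error: HTTP ${response.status}`);
  }
  console.log(await response.json());
}

getAccount().catch(console.error);
```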
OAuth
If you only need to access the Linode API for personal use,
we recommend that you create a personal access token.
If you're designing an application that can authenticate with an arbitrary Linode user, then
you should use the OAuth 2.0 workflows presented in this section.
For a more detailed example of an OAuth 2.0 implementation, see our guide on How to Create an OAuth App with the Linode Python API Library.
Before you implement OAuth in your application, you first need to create an OAuth client. You can do this with the Linode API or via the Cloud Manager:
When creating the client, you'll supply a label and a redirect_uri (referred to as the Callback URL in the Cloud Manager).
The response from this endpoint will give you a client_id and a secret.
Clients can be public or private, and are private by default. You can choose to make the client public when it is created.
A private client is used with applications which can securely store the client secret (that is, the secret returned to you when you first created the client). For example, an application running on a secured server that only the developer has access to would use a private OAuth client. This is also called a confidential client in some OAuth documentation.
A public client is used with applications where the client secret is not guaranteed to be secure. For example, a native app running on a user's computer may not be able to keep the client secret safe, as a user could potentially inspect the source of the application. So, native apps or apps that run in a user's browser should use a public client.
Public and private clients follow different workflows, as described below.
OAuth Workflow
The OAuth workflow is a series of exchanges between your third-party app and Linode. The workflow is used
to authenticate a user before an application can start making API calls on the user's behalf.
Notes:
With respect to the diagram in section 1.2 of RFC 6749, login.linode.com (referred to in this section as the login server)
is the Resource Owner and the Authorization Server; api.linode.com (referred to here as the api server) is the Resource Server.
The OAuth spec refers to the private and public workflows listed below as the authorization code flow and implicit flow.
| PRIVATE WORKFLOW | PUBLIC WORKFLOW |
|------------------|------------------|
| 1. The user visits the application's website and is directed to log in with Linode. | 1. The user visits the application's website and is directed to log in with Linode. |
| 2. Your application then redirects the user to Linode's login server with the client application's client_id and requested OAuth scope, which should appear in the URL of the login page. | 2. Your application then redirects the user to Linode's login server with the client application's client_id and requested OAuth scope, which should appear in the URL of the login page. |
| 3. The user logs into the login server with their username and password. | 3. The user logs into the login server with their username and password. |
| 4. The login server redirects the user to the specified redirect URL with a temporary authorization code (exchange code) in the URL. | 4. The login server redirects the user back to your application with an OAuth access_token embedded in the redirect URL's hash. This is temporary and expires in two hours. No refresh_token is issued. Therefore, once the access_token expires, a new one will need to be issued by having the user log in again. |
| 5. The application issues a POST request (see below) to the login server with the exchange code, client_id, and the client application's client_secret. | |
| 6. The login server responds to the client application with a new OAuth access_token and refresh_token. The access_token is set to expire in two hours. | |
| 7. The refresh_token can be used by contacting the login server with the client_id, client_secret, grant_type, and refresh_token to get a new OAuth access_token and refresh_token. The new access_token is good for another two hours, and the new refresh_token can be used to extend the session again by this same method. | |
OAuth Private Workflow - Additional Details
The following information expands on steps 5 through 7 of the private workflow:
Once the user has logged into Linode and you have received an exchange code,
you will need to trade that exchange code for an access_token and refresh_token. You
do this by making an HTTP POST request to the following address:
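The address itself is not reproduced in this excerpt. The sketch below assumes Linode's login server exposes its token endpoint at https://login.linode.com/oauth/token, so verify that against the current documentation; client_id, client_secret, and the exchange code are values you obtained when registering the client and from step 4.

```typescript
// Sketch of steps 5-7: trading the exchange code for tokens. The token URL is
// an assumption (see note above).
const TOKEN_URL = "https://login.linode.com/oauth/token"; // assumed endpoint

async function exchangeCode(code: string): Promise<void> {
  const response = await fetch(TOKEN_URL, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "authorization_code",
      client_id: process.env.CLIENT_ID!,
      client_secret: process.env.CLIENT_SECRET!,
      code,
    }),
  });
  // Per step 6 above, the response carries an access_token (valid for two
  // hours) and a refresh_token; replaying this request with
  // grant_type=refresh_token extends the session (step 7).
  console.log(await response.json());
}
```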
Rate Limiting
With the Linode API, you can make up to 1,600 general API requests every two minutes per user, as
determined by IP address or by OAuth token. Additionally, there are endpoint-specific limits, defined below.
Note: There may be rate limiting applied at other levels outside of the API, for example, at the load balancer.
/stats endpoints have their own dedicated limits of 100 requests per minute per user.
These endpoints are:
View Linode Statistics
View Linode Statistics (year/month)
View NodeBalancer Statistics
List Managed Stats
Object Storage endpoints have a dedicated limit of 750 requests per second per user.
The Object Storage endpoints are:
Object Storage Endpoints
Opening Support Tickets has a dedicated limit of 2 requests per minute per user.
That endpoint is:
Open Support Ticket
Accepting Service Transfers has a dedicated limit of 2 requests per minute per user.
That endpoint is:
Service Transfer Accept
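A small client-side guard for these limits might look like the following sketch, which retries on HTTP 429 with exponential backoff; whether the API returns a Retry-After header is not stated here, so the code treats it as optional.

```typescript
// Sketch of a client-side guard for the limits above: retry on HTTP 429 with
// exponential backoff, honoring a Retry-After header when one is present
// (whether the API sends it is not stated here, so it is treated as optional).
async function fetchWithBackoff(
  url: string,
  init?: RequestInit,
  maxRetries = 3
): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const response = await fetch(url, init);
    if (response.status !== 429 || attempt >= maxRetries) return response;
    const retryAfter = Number(response.headers.get("Retry-After") ?? 0);
    const delayMs = retryAfter > 0 ? retryAfter * 1000 : 2 ** attempt * 1000;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
}
```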
CLI (Command Line Interface)
The Linode CLI allows you to easily
work with the API using intuitive and simple syntax. It requires a
Personal Access Token
for authentication, and gives you access to all of the features and functionality
of the Linode API that are documented here with CLI examples.
Endpoints that do not have CLI examples are currently unavailable through the CLI, but
can be accessed via other methods such as shell commands and third-party applications.

Personalizer Client

azure.com
Personalizer Service is an Azure Cognitive Service that makes it easy to target content and experiences without complex pre-analysis or cleanup of past data. Given a context and featurized content, the Personalizer Service returns which content item to show to users in rewardActionId. As rewards are sent in response to the use of rewardActionId, the reinforcement learning algorithm will improve the model and improve performance of future rank calls.
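
As an illustrative sketch of that rank/reward loop over REST (the paths and subscription-key header follow the usual Cognitive Services pattern but should be verified against the current Personalizer reference; the endpoint and key are placeholders):

```typescript
// Illustrative sketch of the rank/reward loop. The REST paths and the
// Ocp-Apim-Subscription-Key header are assumptions based on the common
// Cognitive Services pattern; the endpoint and key below are placeholders.
const ENDPOINT = "https://YOUR-RESOURCE.cognitiveservices.azure.com"; // placeholder
const KEY = process.env.PERSONALIZER_KEY!;

async function rankAndReward(): Promise<void> {
  // Ask the service which action to show, given context and action features.
  const rankResponse = await fetch(`${ENDPOINT}/personalizer/v1.0/rank`, {
    method: "POST",
    headers: {
      "Ocp-Apim-Subscription-Key": KEY,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      contextFeatures: [{ timeOfDay: "morning" }],
      actions: [
        { id: "article-a", features: [{ topic: "tech" }] },
        { id: "article-b", features: [{ topic: "sports" }] },
      ],
    }),
  });
  const { eventId, rewardActionId } = await rankResponse.json();
  console.log("Show content item:", rewardActionId);

  // After observing the user's response, send a reward for the same event so
  // the reinforcement learning model can improve future rank calls.
  await fetch(`${ENDPOINT}/personalizer/v1.0/events/${eventId}/reward`, {
    method: "POST",
    headers: {
      "Ocp-Apim-Subscription-Key": KEY,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ value: 1.0 }),
  });
}

rankAndReward().catch(console.error);
```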

NotificationHubsManagementClient

azure.com
Azure NotificationHub client

CognitiveServicesManagementClient

azure.com
Cognitive Services Management Client

Management Groups

azure.com
The Azure Management Groups API enables you to consolidate multiple subscriptions and resources into an organizational hierarchy and to centrally manage access control, policies, alerting, and reporting for those resources.

Azure Bot Service

azure.com
Azure Bot Service is a platform for creating smart conversational agents.

PolicyStatesClient

azure.com

GuestConfiguration

azure.com

NetworkManagementClient

azure.com
The Microsoft Azure Network management API provides a RESTful set of web services that interact with Microsoft Azure Networks service to manage your network resources. The API has entities that capture the relationship between an end user and the Microsoft Azure Networks service.

Security Center

azure.com
API spec for Microsoft.Security (Azure Security Center) resource provider

Azure Reservation

azure.com
This API describes Azure Reservations.