Mock sample for your project: Amazon Augmented AI Runtime API

Integrate with "Amazon Augmented AI Runtime API" from amazonaws.com in no time with Mockoon's ready-to-use mock sample

Amazon Augmented AI Runtime

amazonaws.com

Version: 2019-11-07


Use this API in your project

Integrate third-party APIs faster by using the "Amazon Augmented AI Runtime API" ready-to-use mock sample. Mocking this API helps you accelerate your development lifecycle and improve your integration tests' quality and reliability by accounting for random failures, slow response times, etc.
It also helps reduce your dependency on third-party APIs: no more accounts to create, API keys to provision, access to configure, unplanned downtime, etc.

Description

Amazon Augmented AI (Amazon A2I) adds the benefit of human judgment to any machine learning application. When an AI application can't evaluate data with a high degree of confidence, human reviewers can take over. This human review is called a human review workflow. To create and start a human review workflow, you need three resources: a worker task template, a flow definition, and a human loop. For information about these resources and prerequisites for using Amazon A2I, see Get Started with Amazon Augmented AI in the Amazon SageMaker Developer Guide.
This API reference includes information about API actions and data types that you can use to interact with Amazon A2I programmatically. Use this guide to:
Start a human loop with the StartHumanLoop operation when using Amazon A2I with a custom task type. To learn more about the difference between custom and built-in task types, see Use Task Types. To learn how to start a human loop using this API, see Create and Start a Human Loop for a Custom Task Type in the Amazon SageMaker Developer Guide.
Manage your human loops. You can list all human loops that you have created, describe individual human loops, and stop and delete human loops. To learn more, see Monitor and Manage Your Human Loop in the Amazon SageMaker Developer Guide.
Amazon A2I integrates APIs from various AWS services to create and start human review workflows for those services. To learn how Amazon A2I uses these APIs, see Use APIs in Amazon A2I in the Amazon SageMaker Developer Guide.
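
As a rough illustration of these operations, the sketch below starts and manages a human loop with boto3, pointed at a locally running mock rather than the real AWS endpoint. The local port, ARN, loop name, and dummy credentials are placeholder assumptions, and the exact response fields depend on the mock sample's responses.

```python
# Minimal sketch: calling the A2I Runtime operations described above against a
# local Mockoon mock instead of the real AWS endpoint. The port (3001), the
# flow definition ARN, and the loop name are placeholder assumptions.
import json
import boto3

a2i = boto3.client(
    "sagemaker-a2i-runtime",               # boto3 client name for Amazon A2I Runtime
    region_name="us-east-1",
    endpoint_url="http://localhost:3001",  # point the SDK at the mock server
    aws_access_key_id="mock",              # dummy credentials; the mock ignores them
    aws_secret_access_key="mock",
)

# Start a human loop for a custom task type (StartHumanLoop).
response = a2i.start_human_loop(
    HumanLoopName="example-loop-001",
    FlowDefinitionArn="arn:aws:sagemaker:us-east-1:123456789012:flow-definition/example",
    HumanLoopInput={"InputContent": json.dumps({"taskObject": "s3://bucket/image.jpg"})},
)
print(response["HumanLoopArn"])

# Manage human loops: describe an individual loop, then stop it.
print(a2i.describe_human_loop(HumanLoopName="example-loop-001")["HumanLoopStatus"])
a2i.stop_human_loop(HumanLoopName="example-loop-001")
```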

Other APIs by amazonaws.com

Amazon AppConfig

Use AWS AppConfig, a capability of AWS Systems Manager, to create, manage, and quickly deploy application configurations. AppConfig supports controlled deployments to applications of any size and includes built-in validation checks and monitoring. You can use AppConfig with applications hosted on Amazon EC2 instances, AWS Lambda, containers, mobile applications, or IoT devices.
To prevent errors when deploying application configurations, especially for production systems where a simple typo could cause an unexpected outage, AppConfig includes validators. A validator provides a syntactic or semantic check to ensure that the configuration you want to deploy works as intended. To validate your application configuration data, you provide a schema or a Lambda function that runs against the configuration. The configuration deployment or update can only proceed when the configuration data is valid.
During a configuration deployment, AppConfig monitors the application to ensure that the deployment is successful. If the system encounters an error, AppConfig rolls back the change to minimize impact for your application users. You can configure a deployment strategy for each application or environment that includes deployment criteria, including velocity, bake time, and alarms to monitor. Similar to error monitoring, if a deployment triggers an alarm, AppConfig automatically rolls back to the previous version.
AppConfig supports multiple use cases. Here are some examples. Application tuning: Use AppConfig to carefully introduce changes to your application that can only be tested with production traffic. Feature toggle: Use AppConfig to turn on new features that require a timely deployment, such as a product launch or announcement. Allow list: Use AppConfig to allow premium subscribers to access paid content. Operational issues: Use AppConfig to reduce stress on your application when a dependency or other external factor impacts the system.
This reference is intended to be used with the AWS AppConfig User Guide.
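
To make the feature-toggle use case concrete, here is a minimal, hedged sketch of how a client application might fetch its deployed configuration through the companion AppConfig Data API with boto3. The application, environment, and profile identifiers, as well as the flag names, are placeholder assumptions.

```python
# Minimal sketch of the feature-toggle use case above: a client retrieves its
# deployed configuration through the companion AppConfig Data API.
# Application/environment/profile identifiers are placeholder assumptions.
import json
import boto3

appconfigdata = boto3.client("appconfigdata", region_name="us-east-1")

# Open a configuration session for one application/environment/profile.
session = appconfigdata.start_configuration_session(
    ApplicationIdentifier="my-app",
    EnvironmentIdentifier="production",
    ConfigurationProfileIdentifier="feature-flags",
)

# Poll for the latest deployed configuration; an empty body means "unchanged".
result = appconfigdata.get_latest_configuration(
    ConfigurationToken=session["InitialConfigurationToken"]
)
raw = result["Configuration"].read()
if raw:
    flags = json.loads(raw)
    print("premium content enabled:", flags.get("premium-content", {}).get("enabled"))
```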

Amazon Kinesis

Amazon Kinesis Data Streams Service API Reference. Amazon Kinesis Data Streams is a managed service that scales elastically for real-time processing of streaming big data.

Route53 Recovery Cluster

Welcome to the Amazon Route 53 Application Recovery Controller API Reference Guide for the Recovery Control Data Plane.
Recovery control in Route 53 Application Recovery Controller includes extremely reliable routing controls that enable you to recover applications by rerouting traffic, for example, across Availability Zones or AWS Regions. Routing controls are simple on/off switches hosted on a cluster. A cluster is a set of five redundant regional endpoints against which you can execute API calls to update or get the state of routing controls. You use routing controls to fail over traffic to recover your application across Availability Zones or Regions.
This API guide includes information about how to get and update routing control states in Route 53 Application Recovery Controller. For more information about Route 53 Application Recovery Controller, see the following:
You can create clusters, routing controls, and control panels by using the control plane API for Recovery Control. For more information, see the Amazon Route 53 Application Recovery Controller Recovery Control API Reference.
Route 53 Application Recovery Controller also provides continuous readiness checks to ensure that your applications are scaled to handle failover traffic. For more information about the related API actions, see the Amazon Route 53 Application Recovery Controller Recovery Readiness API Reference.
For more information about creating resilient applications and preparing for recovery readiness with Route 53 Application Recovery Controller, see the Amazon Route 53 Application Recovery Controller Developer Guide.
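
A rough sketch of the get/update pattern described above, using boto3's route53-recovery-cluster client. The cluster endpoint URLs and the routing control ARN are placeholder assumptions; a real cluster provides five redundant regional endpoints to try in turn.

```python
# Minimal sketch of getting and updating a routing control state through the
# cluster's data-plane endpoints. The endpoints and routing control ARN below
# are placeholder assumptions; a real cluster exposes five regional endpoints.
import boto3

cluster_endpoints = {
    "us-east-1": "https://example1.route53-recovery-cluster.us-east-1.amazonaws.com/v1",
    "us-west-2": "https://example2.route53-recovery-cluster.us-west-2.amazonaws.com/v1",
    # ...remaining redundant endpoints...
}
routing_control_arn = "arn:aws:route53-recovery-control::123456789012:controlpanel/abc/routingcontrol/def"

# Try each redundant endpoint until one responds; that is the intended failover pattern.
for region, endpoint in cluster_endpoints.items():
    try:
        client = boto3.client("route53-recovery-cluster", region_name=region, endpoint_url=endpoint)
        state = client.get_routing_control_state(RoutingControlArn=routing_control_arn)
        print(region, state["RoutingControlState"])
        # Flip the control to reroute traffic (fail over).
        client.update_routing_control_state(
            RoutingControlArn=routing_control_arn, RoutingControlState="Off"
        )
        break
    except Exception as err:  # move on to the next redundant endpoint
        print(f"{region} endpoint unavailable: {err}")
```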

AWS License Manager

AWS License Manager makes it easier to manage licenses from software vendors across multiple AWS accounts and on-premises servers.

Amazon Kinesis Video Streams

AWS Transfer Family

Amazon Web Services Transfer Family is a fully managed service that enables the transfer of files over the File Transfer Protocol (FTP), File Transfer Protocol over SSL (FTPS), or Secure Shell (SSH) File Transfer Protocol (SFTP) directly into and out of Amazon Simple Storage Service (Amazon S3). Amazon Web Services helps you seamlessly migrate your file transfer workflows to Amazon Web Services Transfer Family by integrating with existing authentication systems, and providing DNS routing with Amazon Route 53 so nothing changes for your customers and partners, or their applications. With your data in Amazon S3, you can use it with Amazon Web Services services for processing, analytics, machine learning, and archiving. Getting started with Amazon Web Services Transfer Family is easy since there is no infrastructure to buy and set up.

AWS SSO OIDC

AWS Single Sign-On (SSO) OpenID Connect (OIDC) is a web service that enables a client (such as the AWS CLI or a native application) to register with AWS SSO. The service also enables the client to fetch the user's access token upon successful authentication and authorization with AWS SSO. This service conforms with the OAuth 2.0 based implementation of the device authorization grant standard (https://tools.ietf.org/html/rfc8628). For general information about AWS SSO, see What is AWS Single Sign-On? in the AWS SSO User Guide. This API reference guide describes the AWS SSO OIDC operations that you can call programmatically and includes detailed information on data types and errors. AWS provides SDKs that consist of libraries and sample code for various programming languages and platforms such as Java, Ruby, .NET, iOS, and Android. The SDKs provide a convenient way to create programmatic access to AWS SSO and other AWS services. For more information about the AWS SDKs, including how to download and install them, see Tools for Amazon Web Services.
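
As a hedged illustration of the device authorization grant flow described above, the sketch below registers a client, starts device authorization, and requests a token with boto3's sso-oidc client. The SSO start URL is a placeholder assumption, and a real client would poll CreateToken repeatedly until the user finishes authorizing.

```python
# Minimal sketch of the RFC 8628 device authorization grant described above,
# using boto3's sso-oidc client. The start URL is a placeholder assumption.
import time
import boto3

oidc = boto3.client("sso-oidc", region_name="us-east-1")

# 1. Register this client (for example, a CLI tool) with AWS SSO.
reg = oidc.register_client(clientName="example-cli", clientType="public")

# 2. Begin the device authorization flow; the user opens the returned URL and approves.
auth = oidc.start_device_authorization(
    clientId=reg["clientId"],
    clientSecret=reg["clientSecret"],
    startUrl="https://example.awsapps.com/start",   # assumed SSO portal URL
)
print("Visit", auth["verificationUriComplete"], "to authorize this device")

# 3. Request the access token once the user has authorized the device
#    (a production client keeps polling at the suggested interval).
time.sleep(auth["interval"])
token = oidc.create_token(
    clientId=reg["clientId"],
    clientSecret=reg["clientSecret"],
    grantType="urn:ietf:params:oauth:grant-type:device_code",
    deviceCode=auth["deviceCode"],
)
print("access token expires in", token["expiresIn"], "seconds")
```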

Amazon Kinesis Video Signaling Channels

Kinesis Video Streams Signaling Service is an intermediate service that establishes a communication channel for discovering peers and transmitting offers and answers in order to establish a peer-to-peer connection with WebRTC technology.

AWS Route53 Recovery Readiness

AWS Route53 Recovery Readiness

AWS WAFV2

This is the latest version of the WAF API, released in November 2019. The names of the entities that you use to access this API, like endpoints and namespaces, all have the versioning information added, like "V2" or "v2", to distinguish from the prior version. We recommend migrating your resources to this version, because it has a number of significant improvements. If you used WAF prior to this release, you can't use this WAFV2 API to access any WAF resources that you created before. You can access your old rules, web ACLs, and other WAF resources only through the WAF Classic APIs. The WAF Classic APIs have retained the prior names, endpoints, and namespaces. For information, including how to migrate your WAF resources to this version, see the WAF Developer Guide.
WAF is a web application firewall that lets you monitor the HTTP and HTTPS requests that are forwarded to Amazon CloudFront, an Amazon API Gateway REST API, an Application Load Balancer, or an AppSync GraphQL API. WAF also lets you control access to your content. Based on conditions that you specify, such as the IP addresses that requests originate from or the values of query strings, the Amazon API Gateway REST API, CloudFront distribution, the Application Load Balancer, or the AppSync GraphQL API responds to requests either with the requested content or with an HTTP 403 status code (Forbidden). You also can configure CloudFront to return a custom error page when a request is blocked.
This API guide is for developers who need detailed information about WAF API actions, data types, and errors. For detailed information about WAF features and an overview of how to use WAF, see the WAF Developer Guide.
You can make calls using the endpoints listed in WAF endpoints and quotas. For regional applications, you can use any of the endpoints in the list. A regional application can be an Application Load Balancer (ALB), an Amazon API Gateway REST API, or an AppSync GraphQL API. For Amazon CloudFront applications, you must use the API endpoint listed for US East (N. Virginia): us-east-1. Alternatively, you can use one of the Amazon Web Services SDKs to access an API that's tailored to the programming language or platform that you're using. For more information, see Amazon Web Services SDKs.
We currently provide two versions of the WAF API: this API and the prior versions, the classic WAF APIs. This new API provides the same functionality as the older versions, with the following major improvements: You use one API for both global and regional applications. Where you need to distinguish the scope, you specify a Scope parameter and set it to CLOUDFRONT or REGIONAL. You can define a web ACL or rule group with a single call, and update it with a single call. You define all rule specifications in JSON format, and pass them to your rule group or web ACL calls. The limits WAF places on the use of rules more closely reflect the cost of running each type of rule. Rule groups include capacity settings, so you know the maximum cost of a rule group when you use it.
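
For example, the Scope parameter mentioned above is what separates regional and CloudFront web ACLs; a minimal boto3 sketch (the region names are arbitrary placeholders):

```python
# Minimal sketch of the Scope parameter discussed above: one wafv2 client for
# regional resources, and one pinned to us-east-1 for CloudFront distributions.
import boto3

# Regional applications (ALB, API Gateway REST API, AppSync GraphQL API).
regional = boto3.client("wafv2", region_name="us-west-2")
for acl in regional.list_web_acls(Scope="REGIONAL")["WebACLs"]:
    print("regional web ACL:", acl["Name"])

# CloudFront applications must use the US East (N. Virginia) endpoint.
cloudfront = boto3.client("wafv2", region_name="us-east-1")
for acl in cloudfront.list_web_acls(Scope="CLOUDFRONT")["WebACLs"]:
    print("CloudFront web ACL:", acl["Name"])
```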

AWS MediaTailor

Use the AWS Elemental MediaTailor SDKs and CLI to configure scalable ad insertion and linear channels. With MediaTailor, you can assemble existing content into a linear stream and serve targeted ads to viewers while maintaining broadcast quality in over-the-top (OTT) video applications. For information about using the service, including detailed information about the settings covered in this guide, see the AWS Elemental MediaTailor User Guide. Through the SDKs and the CLI you manage AWS Elemental MediaTailor configurations and channels the same as you do through the console. For example, you specify ad insertion behavior and mapping information for the origin server and the ad decision server (ADS).

Amazon MemoryDB

MemoryDB for Redis is a fully managed, Redis-compatible, in-memory database that delivers ultra-fast performance and Multi-AZ durability for modern applications built using microservices architectures. MemoryDB stores the entire database in-memory, enabling low latency and high throughput data access. It is compatible with Redis, a popular open source data store, enabling you to leverage Redis’ flexible and friendly data structures, APIs, and commands.

Other APIs in the same category

ApiManagementClient

azure.com
Use these REST APIs for performing operations on Cache entity in your Azure API Management deployment. Azure API Management also allows for caching responses in an external Azure Cache for Redis. For more information refer to External Redis Cache in ApiManagement.

ComputeManagementConvenienceClient

azure.com

MariaDBManagementClient

azure.com
The Microsoft Azure management API provides create, read, update, and delete functionality for Azure MariaDB resources, including servers, databases, firewall rules, VNET rules, log files, and configurations, under the new business model.

Azure Data Migration Service Resource Provider

azure.com
The Data Migration Service helps people migrate their data from on-premises database servers to Azure, or from older database software to newer software. The service manages one or more workers that are joined to a customer's virtual network, which is assumed to provide connectivity to their databases. To avoid frequent updates to the resource provider, data migration tasks are implemented by the resource provider in a generic way as task resources, each of which has a task type (which identifies the type of work to run), input, and output. The client is responsible for providing the appropriate task type and inputs, which will be passed through unexamined to the machines that implement the functionality, and for understanding the output, which is passed back unexamined to the client.

SqlManagementClient

azure.com
The Azure SQL Database management API provides a RESTful set of web APIs that interact with Azure SQL Database services to manage your databases. The API enables users to create, retrieve, update, and delete databases, servers, and other entities.

Linode API

Introduction
The Linode API provides the ability to programmatically manage the full
range of Linode products and services.
This reference is designed to assist application developers and system
administrators. Each endpoint includes descriptions, request syntax, and
examples using standard HTTP requests. Response data is returned in JSON
format.
This document was generated from our OpenAPI Specification. See the
OpenAPI website for more information.
Download the Linode OpenAPI Specification.
Changelog
View our Changelog to see release
notes on all changes made to our API.
Access and Authentication
Some endpoints are publicly accessible without requiring authentication.
All endpoints affecting your Account, however, require either a Personal
Access Token or OAuth authentication (when using third-party
applications).
Personal Access Token
The easiest way to access the API is with a Personal Access Token (PAT)
generated from the
Linode Cloud Manager or
the Create Personal Access Token endpoint.
All scopes for the OAuth security model (defined below) apply to this
security model as well.
Authentication
| Security Scheme Type: | HTTP |
|-----------------------|------|
| HTTP Authorization Scheme | bearer |
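
For instance, a minimal authenticated request with a PAT sent as a bearer token might look like the following sketch (the token value is a placeholder, and the instances endpoint is used only as an example):

```python
# Minimal sketch of an authenticated request with a Personal Access Token as a
# bearer token. The token value is a placeholder; the endpoint assumes the v4
# API base URL.
import requests

TOKEN = "your-personal-access-token"   # generated in the Cloud Manager

resp = requests.get(
    "https://api.linode.com/v4/linode/instances",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
for linode in resp.json()["data"]:
    print(linode["label"], linode["status"])
```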
OAuth
If you only need to access the Linode API for personal use,
we recommend that you create a personal access token.
If you're designing an application that can authenticate with an arbitrary Linode user, then
you should use the OAuth 2.0 workflows presented in this section.
For a more detailed example of an OAuth 2.0 implementation, see our guide on How to Create an OAuth App with the Linode Python API Library.
Before you implement OAuth in your application, you first need to create an OAuth client. You can do this with the Linode API or via the Cloud Manager:
When creating the client, you'll supply a label and a redirect_uri (referred to as the Callback URL in the Cloud Manager).
The response from this endpoint will give you a client_id and a secret.
Clients can be public or private, and are private by default. You can choose to make the client public when it is created.
A private client is used with applications which can securely store the client secret (that is, the secret returned to you when you first created the client). For example, an application running on a secured server that only the developer has access to would use a private OAuth client. This is also called a confidential client in some OAuth documentation.
A public client is used with applications where the client secret is not guaranteed to be secure. For example, a native app running on a user's computer may not be able to keep the client secret safe, as a user could potentially inspect the source of the application. So, native apps or apps that run in a user's browser should use a public client.
Public and private clients follow different workflows, as described below.
OAuth Workflow
The OAuth workflow is a series of exchanges between your third-party app and Linode. The workflow is used
to authenticate a user before an application can start making API calls on the user's behalf.
Notes:
With respect to the diagram in section 1.2 of RFC 6749, login.linode.com (referred to in this section as the login server)
is the Resource Owner and the Authorization Server; api.linode.com (referred to here as the api server) is the Resource Server.
The OAuth spec refers to the private and public workflows listed below as the authorization code flow and implicit flow.
| PRIVATE WORKFLOW | PUBLIC WORKFLOW |
|------------------|------------------|
| 1. The user visits the application's website and is directed to login with Linode. | 1. The user visits the application's website and is directed to login with Linode. |
| 2. Your application then redirects the user to Linode's login server with the client application's client_id and requested OAuth scope, which should appear in the URL of the login page. | 2. Your application then redirects the user to Linode's login server with the client application's client_id and requested OAuth scope, which should appear in the URL of the login page. |
| 3. The user logs into the login server with their username and password. | 3. The user logs into the login server with their username and password. |
| 4. The login server redirects the user to the specified redirect URL with a temporary authorization code (exchange code) in the URL. | 4. The login server redirects the user back to your application with an OAuth access_token embedded in the redirect URL's hash. This is temporary and expires in two hours. No refresh_token is issued. Therefore, once the access_token expires, a new one will need to be issued by having the user log in again. |
| 5. The application issues a POST request (see below) to the login server with the exchange code, client_id, and the client application's client_secret. | |
| 6. The login server responds to the client application with a new OAuth access_token and refresh_token. The access_token is set to expire in two hours. | |
| 7. The refresh_token can be used by contacting the login server with the client_id, client_secret, grant_type, and refresh_token to get a new OAuth access_token and refresh_token. The new access_token is good for another two hours, and the new refresh_token can be used to extend the session again by this same method. | |
OAuth Private Workflow - Additional Details
The following information expands on steps 5 through 7 of the private workflow:
Once the user has logged into Linode and you have received an exchange code,
you will need to trade that exchange code for an access_token and refresh_token. You
do this by making an HTTP POST request to the following address:
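
The address itself is not reproduced above; the sketch below assumes the login server's standard OAuth token endpoint and uses placeholder client credentials and exchange code:

```python
# Minimal sketch of steps 5-7 above. The token endpoint URL is an assumption
# (the address is not shown in the text); the client_id, client_secret, and
# exchange code are placeholders.
import requests

TOKEN_URL = "https://login.linode.com/oauth/token"   # assumed login-server token endpoint

# Steps 5-6: trade the exchange code for an access_token and refresh_token.
tokens = requests.post(TOKEN_URL, data={
    "client_id": "your-client-id",
    "client_secret": "your-client-secret",
    "grant_type": "authorization_code",
    "code": "exchange-code-from-redirect",
}).json()

# Step 7: use the refresh_token to obtain a fresh pair before the two-hour expiry.
refreshed = requests.post(TOKEN_URL, data={
    "client_id": "your-client-id",
    "client_secret": "your-client-secret",
    "grant_type": "refresh_token",
    "refresh_token": tokens["refresh_token"],
}).json()
print(refreshed["access_token"])
```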
Rate Limiting
With the Linode API, you can make up to 1,600 general API requests every two minutes per user, as
determined by IP address or by OAuth token. Additionally, there are endpoint-specific limits defined below; a minimal retry sketch follows the endpoint lists.
Note: There may be rate limiting applied at other levels outside of the API, for example, at the load balancer.
/stats endpoints have their own dedicated limits of 100 requests per minute per user.
These endpoints are:
View Linode Statistics
View Linode Statistics (year/month)
View NodeBalancer Statistics
List Managed Stats
Object Storage endpoints have a dedicated limit of 750 requests per second per user.
The Object Storage endpoints are:
Object Storage Endpoints
Opening Support Tickets has a dedicated limit of 2 requests per minute per user.
That endpoint is:
Open Support Ticket
Accepting Service Transfers has a dedicated limit of 2 requests per minute per user.
That endpoint is:
Service Transfer Accept
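
A minimal retry sketch for these limits, which backs off on an HTTP 429 response using the Retry-After header (assumed to be returned alongside rate-limit errors):

```python
# Minimal retry sketch for the limits above: on an HTTP 429 response, wait for
# the Retry-After interval (assumed to be present) and try again.
import time
import requests

def get_with_retry(url, token, attempts=5):
    for _ in range(attempts):
        resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp.json()
        # Back off for the server-suggested interval, or a conservative default.
        time.sleep(int(resp.headers.get("Retry-After", 10)))
    raise RuntimeError("rate limit still exceeded after retries")

instances = get_with_retry("https://api.linode.com/v4/linode/instances", "your-personal-access-token")
```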
CLI (Command Line Interface)
The Linode CLI allows you to easily
work with the API using intuitive and simple syntax. It requires a
Personal Access Token
for authentication, and gives you access to all of the features and functionality
of the Linode API that are documented here with CLI examples.
Endpoints that do not have CLI examples are currently unavailable through the CLI, but
can be accessed via other methods such as Shell commands and other third-party applications.

SqlManagementClient

azure.com
The Azure SQL Database management API provides a RESTful set of web APIs that interact with Azure SQL Database services to manage your databases. The API enables users to create, retrieve, update, and delete databases, servers, and other entities.

Azure SQL Database Datamasking Policies and Rules

azure.com
Provides create, read, update, and delete functionality for Azure SQL Database data masking policies and rules.

Security Center

azure.com
API spec for Microsoft.Security (Azure Security Center) resource provider

SqlManagementClient

azure.com
The Azure SQL Database management API provides a RESTful set of web APIs that interact with Azure SQL Database services to manage your databases. The API enables users to create, retrieve, update, and delete databases, servers, and other entities.

PolicyClient

azure.com
To manage and control access to your resources, you can define customized policies and assign them at a scope.

Microsoft.ResourceHealth

azure.com
The Resource Health Client.