Mock sample for your project: AWS Audit Manager API

Integrate with "AWS Audit Manager API" from amazonaws.com in no time with Mockoon's ready to use mock sample

AWS Audit Manager

amazonaws.com

Version: 2017-07-25


Use this API in your project

Speed up your application development by using the "AWS Audit Manager API" ready-to-use mock sample. Mocking this API will help you accelerate your development lifecycle and let you stop relying on an external API to get the job done. No more API keys to provision, access to configure, or unplanned downtime: just work.
Enhance your development infrastructure by mocking third-party APIs during integration testing.
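For example, you can point an AWS SDK client at the mock's local URL instead of the real endpoint, so your code exercises the same call paths without touching AWS. The sketch below uses Python with boto3; the port, region, and dummy credentials are placeholders to adapt to your own Mockoon environment.

```python
# A minimal sketch: pointing an AWS SDK client at a locally running mock
# instead of the real AWS endpoint. The URL and port are assumptions;
# adjust them to whatever your Mockoon environment actually listens on.
import boto3

client = boto3.client(
    "auditmanager",
    region_name="us-east-1",               # any valid region string
    endpoint_url="http://localhost:3000",  # hypothetical local mock URL
    aws_access_key_id="mock",              # dummy credentials; the mock
    aws_secret_access_key="mock",          # does not validate signatures
)

# Calls now hit the mock server rather than amazonaws.com.
response = client.list_assessments()
print(response)
```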

Description

Welcome to the Audit Manager API reference. This guide is for developers who need detailed information about the Audit Manager API operations, data types, and errors. Audit Manager is a service that provides automated evidence collection so that you can continuously audit your Amazon Web Services usage, and assess the effectiveness of your controls to better manage risk and simplify compliance. Audit Manager provides pre-built frameworks that structure and automate assessments for a given compliance standard. Frameworks include a pre-built collection of controls with descriptions and testing procedures, which are grouped according to the requirements of the specified compliance standard or regulation. You can also customize frameworks and controls to support internal audits with unique requirements. Use the following links to get started with the Audit Manager API:

- Actions: An alphabetical list of all Audit Manager API operations.
- Data types: An alphabetical list of all Audit Manager data types.
- Common parameters: Parameters that all Query operations can use.
- Common errors: Client and server errors that all operations can return.

If you're new to Audit Manager, we recommend that you review the Audit Manager User Guide.
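As a rough illustration of the operations this reference covers, here is a hedged boto3 sketch that lists the pre-built (standard) frameworks; the response field names follow the Audit Manager data types, but the loop is illustrative only. Against a Mockoon mock, add the endpoint_url override shown earlier.

```python
# Sketch only: list the prebuilt (standard) frameworks Audit Manager provides.
import boto3

auditmanager = boto3.client("auditmanager", region_name="us-east-1")

frameworks = auditmanager.list_assessment_frameworks(frameworkType="Standard")
for framework in frameworks.get("frameworkMetadataList", []):
    print(framework.get("name"), framework.get("complianceType"))
```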

Other APIs by amazonaws.com

Amazon CloudDirectory

Amazon Cloud Directory is a component of the AWS Directory Service that simplifies the development and management of cloud-scale web, mobile, and IoT applications. This guide describes the Cloud Directory operations that you can call programmatically and includes detailed information on data types and errors. For information about Cloud Directory features, see AWS Directory Service and the Amazon Cloud Directory Developer Guide.

Amazon CloudFront

This is the Amazon CloudFront API Reference. This guide is for developers who need detailed information about CloudFront API actions, data types, and errors. For detailed information about CloudFront features, see the Amazon CloudFront Developer Guide.

AWS Certificate Manager Private Certificate Authority

This is the ACM Private CA API Reference. It provides descriptions, syntax, and usage examples for each of the actions and data types involved in creating and managing private certificate authorities (CA) for your organization. The documentation for each action shows the Query API request parameters and the XML response. Alternatively, you can use one of the AWS SDKs to access an API that's tailored to the programming language or platform that you're using. For more information, see AWS SDKs. Each ACM Private CA API operation has a quota that determines the number of times the operation can be called per second. ACM Private CA throttles API requests at different rates depending on the operation. Throttling means that ACM Private CA rejects an otherwise valid request because the request exceeds the operation's quota for the number of requests per second. When a request is throttled, ACM Private CA returns a ThrottlingException error. ACM Private CA does not guarantee a minimum request rate for APIs. To see an up-to-date list of your ACM Private CA quotas, or to request a quota increase, log into your AWS account and visit the Service Quotas console.
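Because throttled requests surface as ThrottlingException errors, callers typically lean on the SDK's built-in retry behavior rather than handling the error by hand. A minimal boto3 sketch, where the retry settings are illustrative rather than recommended values:

```python
# Hedged sketch of coping with ACM Private CA throttling: botocore's
# built-in retry modes back off and retry throttled calls automatically.
import boto3
from botocore.config import Config

acm_pca = boto3.client(
    "acm-pca",
    region_name="us-east-1",
    config=Config(retries={"max_attempts": 10, "mode": "adaptive"}),
)

# This call is retried automatically if it gets throttled.
print(acm_pca.list_certificate_authorities())
```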

AWS IoT Events Data

AWS IoT Events monitors your equipment or device fleets for failures or changes in operation, and triggers actions when such events occur. You can use AWS IoT Events Data API commands to send inputs to detectors, list detectors, and view or update a detector's status. For more information, see What is AWS IoT Events? in the AWS IoT Events Developer Guide.
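A hedged boto3 sketch of those Data API commands; the detector model and input names are made up for the example, and the response fields should be checked against the API reference.

```python
# Illustrative only: list detectors for a (hypothetical) detector model
# and send an input message via the IoT Events Data API.
import json
import boto3

iotevents_data = boto3.client("iotevents-data", region_name="us-east-1")

# "PumpMonitor" and "PumpPressureInput" are made-up names for this sketch.
detectors = iotevents_data.list_detectors(detectorModelName="PumpMonitor")
print(detectors.get("detectorSummaries", []))

iotevents_data.batch_put_message(
    messages=[{
        "messageId": "msg-001",
        "inputName": "PumpPressureInput",
        "payload": json.dumps({"pressure": 72}).encode("utf-8"),
    }]
)
```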

Application Auto Scaling

With Application Auto Scaling, you can configure automatic scaling for the following resources:

- Amazon AppStream 2.0 fleets
- Amazon Aurora Replicas
- Amazon Comprehend document classification and entity recognizer endpoints
- Amazon DynamoDB tables and global secondary indexes throughput capacity
- Amazon ECS services
- Amazon ElastiCache for Redis clusters (replication groups)
- Amazon EMR clusters
- Amazon Keyspaces (for Apache Cassandra) tables
- Lambda function provisioned concurrency
- Amazon Managed Streaming for Apache Kafka broker storage
- Amazon SageMaker endpoint variants
- Spot Fleet (Amazon EC2) requests
- Custom resources provided by your own applications or services

API Summary: The Application Auto Scaling service API includes three key sets of actions:

- Register and manage scalable targets - Register Amazon Web Services or custom resources as scalable targets (a resource that Application Auto Scaling can scale), set minimum and maximum capacity limits, and retrieve information on existing scalable targets.
- Configure and manage automatic scaling - Define scaling policies to dynamically scale your resources in response to CloudWatch alarms, schedule one-time or recurring scaling actions, and retrieve your recent scaling activity history.
- Suspend and resume scaling - Temporarily suspend and later resume automatic scaling by calling the RegisterScalableTarget API action for any Application Auto Scaling scalable target. You can suspend and resume (individually or in combination) scale-out activities that are triggered by a scaling policy, scale-in activities that are triggered by a scaling policy, and scheduled scaling.

To learn more about Application Auto Scaling, including information about granting IAM users required permissions for Application Auto Scaling actions, see the Application Auto Scaling User Guide.
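A minimal sketch of the "register and manage scalable targets" and "configure automatic scaling" steps, using boto3 against a hypothetical DynamoDB table; the table name and capacity numbers are placeholders.

```python
# Register a DynamoDB table's read capacity as a scalable target, then
# attach a target tracking scaling policy to it.
import boto3

aas = boto3.client("application-autoscaling", region_name="us-east-1")

aas.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/ExampleTable",                    # hypothetical table
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=100,
)

# Target tracking policy that scales on consumed read capacity utilization.
aas.put_scaling_policy(
    PolicyName="example-read-scaling",
    ServiceNamespace="dynamodb",
    ResourceId="table/ExampleTable",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)
```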

AWS Auto Scaling Plans

Use AWS Auto Scaling to create scaling plans for your applications to automatically scale your scalable AWS resources.

API Summary: You can use the AWS Auto Scaling service API to accomplish the following tasks:

- Create and manage scaling plans
- Define target tracking scaling policies to dynamically scale your resources based on utilization
- Scale Amazon EC2 Auto Scaling groups using predictive scaling and dynamic scaling to scale your Amazon EC2 capacity faster
- Set minimum and maximum capacity limits
- Retrieve information on existing scaling plans
- Access current forecast data and historical forecast data for up to 56 days previous

To learn more about AWS Auto Scaling, including information about granting IAM users required permissions for AWS Auto Scaling actions, see the AWS Auto Scaling User Guide.
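A brief boto3 sketch of retrieving existing scaling plans; the field names mirror the DescribeScalingPlans response, but the snippet is illustrative only.

```python
# Sketch: list existing scaling plans and their status.
import boto3

autoscaling_plans = boto3.client("autoscaling-plans", region_name="us-east-1")

plans = autoscaling_plans.describe_scaling_plans()
for plan in plans.get("ScalingPlans", []):
    print(plan.get("ScalingPlanName"), plan.get("StatusCode"))
```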

Amazon Braket

The Amazon Braket API Reference provides information about the operations and structures supported in Amazon Braket.

AWS IoT Analytics

IoT Analytics allows you to collect large amounts of device data, process messages, and store them. You can then query the data and run sophisticated analytics on it. IoT Analytics enables advanced data exploration through integration with Jupyter Notebooks and data visualization through integration with Amazon QuickSight. Traditional analytics and business intelligence tools are designed to process structured data. IoT data often comes from devices that record noisy processes (such as temperature, motion, or sound). As a result, the data from these devices can have significant gaps, corrupted messages, and false readings that must be cleaned up before analysis can occur. Also, IoT data is often only meaningful in the context of other data from external sources.

IoT Analytics automates the steps required to analyze data from IoT devices. IoT Analytics filters, transforms, and enriches IoT data before storing it in a time-series data store for analysis. You can set up the service to collect only the data you need from your devices, apply mathematical transforms to process the data, and enrich the data with device-specific metadata such as device type and location before storing it. Then, you can analyze your data by running queries using the built-in SQL query engine, or perform more complex analytics and machine learning inference. IoT Analytics includes pre-built models for common IoT use cases so you can answer questions like which devices are about to fail or which customers are at risk of abandoning their wearable devices.

AWS Amplify

Amplify enables developers to develop and deploy cloud-powered mobile and web apps. The Amplify Console provides a continuous delivery and hosting service for web applications. For more information, see the Amplify Console User Guide. The Amplify Framework is a comprehensive set of SDKs, libraries, tools, and documentation for client app development. For more information, see the Amplify Framework.

AWS Certificate Manager

You can use Amazon Web Services Certificate Manager (ACM) to manage SSL/TLS certificates for your Amazon Web Services-based websites and applications. For more information about using ACM, see the Amazon Web Services Certificate Manager User Guide.

AWS CodePipeline

This is the AWS CodePipeline API Reference. This guide provides descriptions of the actions and data types for AWS CodePipeline. Some functionality for your pipeline can only be configured through the API. For more information, see the AWS CodePipeline User Guide.

You can use the AWS CodePipeline API to work with pipelines, stages, actions, and transitions. Pipelines are models of automated release processes. Each pipeline is uniquely named, and consists of stages, actions, and transitions. You can work with pipelines by calling:

- CreatePipeline, which creates a uniquely named pipeline.
- DeletePipeline, which deletes the specified pipeline.
- GetPipeline, which returns information about the pipeline structure and pipeline metadata, including the pipeline Amazon Resource Name (ARN).
- GetPipelineExecution, which returns information about a specific execution of a pipeline.
- GetPipelineState, which returns information about the current state of the stages and actions of a pipeline.
- ListActionExecutions, which returns action-level details for past executions. The details include full stage and action-level details, including individual action duration, status, any errors that occurred during the execution, and input and output artifact location details.
- ListPipelines, which gets a summary of all of the pipelines associated with your account.
- ListPipelineExecutions, which gets a summary of the most recent executions for a pipeline.
- StartPipelineExecution, which runs the most recent revision of an artifact through the pipeline.
- StopPipelineExecution, which stops the specified pipeline execution from continuing through the pipeline.
- UpdatePipeline, which updates a pipeline with edits or changes to the structure of the pipeline.

Pipelines include stages. Each stage contains one or more actions that must complete before the next stage begins. A stage results in success or failure. If a stage fails, the pipeline stops at that stage and remains stopped until either a new version of an artifact appears in the source location, or a user takes action to rerun the most recent artifact through the pipeline. You can call GetPipelineState, which displays the status of a pipeline, including the status of stages in the pipeline, or GetPipeline, which returns the entire structure of the pipeline, including the stages of that pipeline. For more information about the structure of stages and actions, see AWS CodePipeline Pipeline Structure Reference.

Pipeline stages include actions that are categorized into categories such as source or build actions performed in a stage of a pipeline. For example, you can use a source action to import artifacts into a pipeline from a source such as Amazon S3. Like stages, you do not work with actions directly in most cases, but you do define and interact with actions when working with pipeline operations such as CreatePipeline and GetPipelineState. Valid action categories are: Source, Build, Test, Deploy, Approval, and Invoke.

Pipelines also include transitions, which allow the transition of artifacts from one stage to the next in a pipeline after the actions in one stage complete. You can work with transitions by calling:

- DisableStageTransition, which prevents artifacts from transitioning to the next stage in a pipeline.
- EnableStageTransition, which enables transition of artifacts between stages in a pipeline.
Using the API to integrate with AWS CodePipeline: For third-party integrators or developers who want to create their own integrations with AWS CodePipeline, the expected sequence varies from the standard API user. To integrate with AWS CodePipeline, developers need to work with the following items:

Jobs, which are instances of an action. For example, a job for a source action might import a revision of an artifact from a source. You can work with jobs by calling:

- AcknowledgeJob, which confirms whether a job worker has received the specified job.
- GetJobDetails, which returns the details of a job.
- PollForJobs, which determines whether there are any jobs to act on.
- PutJobFailureResult, which provides details of a job failure.
- PutJobSuccessResult, which provides details of a job success.

Third party jobs, which are instances of an action created by a partner action and integrated into AWS CodePipeline. Partner actions are created by members of the AWS Partner Network. You can work with third party jobs by calling:

- AcknowledgeThirdPartyJob, which confirms whether a job worker has received the specified job.
- GetThirdPartyJobDetails, which requests the details of a job for a partner action.
- PollForThirdPartyJobs, which determines whether there are any jobs to act on.
- PutThirdPartyJobFailureResult, which provides details of a job failure.
- PutThirdPartyJobSuccessResult, which provides details of a job success.
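For the read-only calls described above (ListPipelines, GetPipelineState), a hedged boto3 sketch might look like this; the pipeline name is a placeholder.

```python
# List pipelines in the account, then inspect one pipeline's stage status.
import boto3

codepipeline = boto3.client("codepipeline", region_name="us-east-1")

for pipeline in codepipeline.list_pipelines().get("pipelines", []):
    print(pipeline["name"])

state = codepipeline.get_pipeline_state(name="example-pipeline")
for stage in state.get("stageStates", []):
    print(stage.get("stageName"), stage.get("latestExecution", {}).get("status"))
```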

Amazon Elastic Block Store

You can use the Amazon Elastic Block Store (Amazon EBS) direct APIs to create Amazon EBS snapshots, write data directly to your snapshots, read data on your snapshots, and identify the differences or changes between two snapshots. If you're an independent software vendor (ISV) who offers backup services for Amazon EBS, the EBS direct APIs make it more efficient and cost-effective to track incremental changes on your Amazon EBS volumes through snapshots. This can be done without having to create new volumes from snapshots, and then use Amazon Elastic Compute Cloud (Amazon EC2) instances to compare the differences. You can create incremental snapshots directly from data on-premises into volumes and the cloud to use for quick disaster recovery. With the ability to write and read snapshots, you can write your on-premises data to a snapshot during a disaster. Then after recovery, you can restore it back to Amazon Web Services or on-premises from the snapshot. You no longer need to build and maintain complex mechanisms to copy data to and from Amazon EBS. This API reference provides detailed information about the actions, data types, parameters, and errors of the EBS direct APIs. For more information about the elements that make up the EBS direct APIs, and examples of how to use them effectively, see Accessing the Contents of an Amazon EBS Snapshot in the Amazon Elastic Compute Cloud User Guide. For more information about the supported Amazon Web Services Regions, endpoints, and service quotas for the EBS direct APIs, see Amazon Elastic Block Store Endpoints and Quotas in the Amazon Web Services General Reference.
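A short boto3 sketch of the read-side EBS direct APIs described above; the snapshot IDs are placeholders.

```python
# List the blocks in one snapshot and the blocks that changed between two.
import boto3

ebs = boto3.client("ebs", region_name="us-east-1")

blocks = ebs.list_snapshot_blocks(SnapshotId="snap-0123456789abcdef0")
print(len(blocks.get("Blocks", [])), "blocks in the snapshot")

changed = ebs.list_changed_blocks(
    FirstSnapshotId="snap-0123456789abcdef0",
    SecondSnapshotId="snap-0fedcba9876543210",
)
print(len(changed.get("ChangedBlocks", [])), "blocks changed between snapshots")
```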

Other APIs in the same category

ApiManagementClient

azure.com
Use these REST APIs for performing operations on Email Templates associated with your Azure API Management deployment.

Amazon EMR Containers

Amazon EMR on EKS provides a deployment option for Amazon EMR that allows you to run open-source big data frameworks on Amazon Elastic Kubernetes Service (Amazon EKS). With this deployment option, you can focus on running analytics workloads while Amazon EMR on EKS builds, configures, and manages containers for open-source applications. For more information about Amazon EMR on EKS concepts and tasks, see What is Amazon EMR on EKS. Amazon EMR containers is the API name for Amazon EMR on EKS. The emr-containers prefix is used in the following scenarios:

- It is the prefix in the CLI commands for Amazon EMR on EKS. For example, aws emr-containers start-job-run.
- It is the prefix before IAM policy actions for Amazon EMR on EKS. For example, "Action": ["emr-containers:StartJobRun"]. For more information, see Policy actions for Amazon EMR on EKS.
- It is the prefix used in Amazon EMR on EKS service endpoints. For example, emr-containers.us-east-2.amazonaws.com. For more information, see Amazon EMR on EKS Service Endpoints.
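The same prefix shows up as the boto3 client name. A hedged sketch that lists the virtual clusters registered for Amazon EMR on EKS; treat the response field names as assumptions to verify against the API reference.

```python
# Illustrative boto3 equivalent of the "emr-containers" prefix mentioned above.
import boto3

emr_containers = boto3.client("emr-containers", region_name="us-east-2")

clusters = emr_containers.list_virtual_clusters()
for cluster in clusters.get("virtualClusters", []):
    print(cluster.get("id"), cluster.get("name"), cluster.get("state"))
```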

AWS RoboMaker

This section provides documentation for the AWS RoboMaker API operations.

Amazon Elastic Kubernetes Service

Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that makes it easy for you to run Kubernetes on Amazon Web Services without needing to stand up or maintain your own Kubernetes control plane. Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications. Amazon EKS runs up-to-date versions of the open-source Kubernetes software, so you can use all the existing plugins and tooling from the Kubernetes community. Applications running on Amazon EKS are fully compatible with applications running on any standard Kubernetes environment, whether running in on-premises data centers or public clouds. This means that you can easily migrate any standard Kubernetes application to Amazon EKS without any code modification required.

AWS Fault Injection Simulator

AWS Fault Injection Simulator is a managed service that enables you to perform fault injection experiments on your AWS workloads. For more information, see the AWS Fault Injection Simulator User Guide.

AWS IoT Greengrass V2

IoT Greengrass brings local compute, messaging, data management, sync, and ML inference capabilities to edge devices. This enables devices to collect and analyze data closer to the source of information, react autonomously to local events, and communicate securely with each other on local networks. Local devices can also communicate securely with Amazon Web Services IoT Core and export IoT data to the Amazon Web Services Cloud. IoT Greengrass developers can use Lambda functions and components to create and deploy applications to fleets of edge devices for local operation. IoT Greengrass Version 2 provides a new major version of the IoT Greengrass Core software, new APIs, and a new console. Use this API reference to learn how to use the IoT Greengrass V2 API operations to manage components, deployments, and core devices. For more information, see What is IoT Greengrass? in the IoT Greengrass V2 Developer Guide.
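A rough boto3 sketch of the component and core-device listing calls this reference covers; the field names are assumptions to verify against the IoT Greengrass V2 API reference.

```python
# List your own published components and the registered core devices.
import boto3

greengrassv2 = boto3.client("greengrassv2", region_name="us-east-1")

# Components you have published to your own account.
for component in greengrassv2.list_components(scope="PRIVATE").get("components", []):
    print(component.get("componentName"))

# Core devices registered with IoT Greengrass V2.
for device in greengrassv2.list_core_devices().get("coreDevices", []):
    print(device.get("coreDeviceThingName"), device.get("status"))
```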

AWS Database Migration Service

Database Migration Service (DMS) can migrate your data to and from the most widely used commercial and open-source databases such as Oracle, PostgreSQL, Microsoft SQL Server, Amazon Redshift, MariaDB, Amazon Aurora, MySQL, and SAP Adaptive Server Enterprise (ASE). The service supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle to MySQL or SQL Server to PostgreSQL. For more information about DMS, see What Is Database Migration Service? in the Database Migration Service User Guide.

CodeArtifact

AWS CodeArtifact is a fully managed artifact repository compatible with language-native package managers and build tools such as npm, Apache Maven, and pip. You can use CodeArtifact to share packages with development teams and pull packages. Packages can be pulled from both public and CodeArtifact repositories. You can also create an upstream relationship between a CodeArtifact repository and another repository, which effectively merges their contents from the point of view of a package manager client.

AWS CodeArtifact Components: Use the information in this guide to help you work with the following CodeArtifact components:

- Repository: A CodeArtifact repository contains a set of package versions, each of which maps to a set of assets, or files. Repositories are polyglot, so a single repository can contain packages of any supported type. Each repository exposes endpoints for fetching and publishing packages using tools like the npm CLI, the Maven CLI (mvn), and pip.
- Domain: Repositories are aggregated into a higher-level entity known as a domain. All package assets and metadata are stored in the domain, but are consumed through repositories. A given package asset, such as a Maven JAR file, is stored once per domain, no matter how many repositories it's present in. All of the assets and metadata in a domain are encrypted with the same customer master key (CMK) stored in AWS Key Management Service (AWS KMS). Each repository is a member of a single domain and can't be moved to a different domain. The domain allows organizational policy to be applied across multiple repositories, such as which accounts can access repositories in the domain, and which public repositories can be used as sources of packages. Although an organization can have multiple domains, we recommend a single production domain that contains all published artifacts so that teams can find and share packages across their organization.
- Package: A package is a bundle of software and the metadata required to resolve dependencies and install the software. CodeArtifact supports npm, PyPI, and Maven package formats. In CodeArtifact, a package consists of: a name (for example, webpack is the name of a popular npm package), an optional namespace (for example, @types in @types/node), a set of versions (for example, 1.0.0, 1.0.1, 1.0.2, etc.), and package-level metadata (for example, npm tags).
- Package version: A version of a package, such as @types/node 12.6.9. The version number format and semantics vary for different package formats. For example, npm package versions must conform to the Semantic Versioning specification. In CodeArtifact, a package version consists of the version identifier, metadata at the package version level, and a set of assets.
- Upstream repository: One repository is upstream of another when the package versions in it can be accessed from the repository endpoint of the downstream repository, effectively merging the contents of the two repositories from the point of view of a client. CodeArtifact allows creating an upstream relationship between two repositories.
- Asset: An individual file stored in CodeArtifact associated with a package version, such as an npm .tgz file or Maven POM and JAR files.

CodeArtifact supports these operations:

- AssociateExternalConnection: Adds an existing external connection to a repository.
- CopyPackageVersions: Copies package versions from one repository to another repository in the same domain.
- CreateDomain: Creates a domain.
- CreateRepository: Creates a CodeArtifact repository in a domain.
- DeleteDomain: Deletes a domain. You cannot delete a domain that contains repositories.
- DeleteDomainPermissionsPolicy: Deletes the resource policy that is set on a domain.
- DeletePackageVersions: Deletes versions of a package. After a package has been deleted, it can be republished, but its assets and metadata cannot be restored because they have been permanently removed from storage.
- DeleteRepository: Deletes a repository.
- DeleteRepositoryPermissionsPolicy: Deletes the resource policy that is set on a repository.
- DescribeDomain: Returns a DomainDescription object that contains information about the requested domain.
- DescribePackageVersion: Returns a PackageVersionDescription object that contains details about a package version.
- DescribeRepository: Returns a RepositoryDescription object that contains detailed information about the requested repository.
- DisposePackageVersions: Disposes versions of a package. A package version with the status Disposed cannot be restored because its assets have been permanently removed from storage.
- DisassociateExternalConnection: Removes an existing external connection from a repository.
- GetAuthorizationToken: Generates a temporary authorization token for accessing repositories in the domain. The token expires after the authorization period has passed. The default authorization period is 12 hours and can be customized to any length with a maximum of 12 hours.
- GetDomainPermissionsPolicy: Returns the policy of a resource that is attached to the specified domain.
- GetPackageVersionAsset: Returns the contents of an asset that is in a package version.
- GetPackageVersionReadme: Gets the readme file or descriptive text for a package version.
- GetRepositoryEndpoint: Returns the endpoint of a repository for a specific package format. A repository has one endpoint for each package format: npm, pypi, and maven.
- GetRepositoryPermissionsPolicy: Returns the resource policy that is set on a repository.
- ListDomains: Returns a list of DomainSummary objects. Each returned DomainSummary object contains information about a domain.
- ListPackages: Lists the packages in a repository.
- ListPackageVersionAssets: Lists the assets for a given package version.
- ListPackageVersionDependencies: Returns a list of the direct dependencies for a package version.
- ListPackageVersions: Returns a list of package versions for a specified package in a repository.
- ListRepositories: Returns a list of repositories owned by the AWS account that called this method.
- ListRepositoriesInDomain: Returns a list of the repositories in a domain.
- PutDomainPermissionsPolicy: Attaches a resource policy to a domain.
- PutRepositoryPermissionsPolicy: Sets the resource policy on a repository that specifies permissions to access it.
- UpdatePackageVersionsStatus: Updates the status of one or more versions of a package.
- UpdateRepository: Updates the properties of a repository.
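A hedged boto3 sketch of GetAuthorizationToken and GetRepositoryEndpoint; the domain and repository names are placeholders.

```python
# Fetch a temporary auth token and the npm endpoint for a repository.
import boto3

codeartifact = boto3.client("codeartifact", region_name="us-east-1")

token = codeartifact.get_authorization_token(
    domain="example-domain",
    durationSeconds=3600,            # up to the 12-hour maximum noted above
)
print("token expires at:", token.get("expiration"))

endpoint = codeartifact.get_repository_endpoint(
    domain="example-domain",
    repository="example-repo",
    format="npm",                    # one endpoint per package format
)
print("npm endpoint:", endpoint.get("repositoryEndpoint"))
```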

InfrastructureInsightsManagementClient

azure.com
Region health operation endpoints and objects.

AWS CodeCommit

This is the AWS CodeCommit API Reference. This reference provides descriptions of the operations and data types for the AWS CodeCommit API along with usage examples. You can use the AWS CodeCommit API to work with the following objects:

Repositories, by calling the following:

- BatchGetRepositories, which returns information about one or more repositories associated with your AWS account.
- CreateRepository, which creates an AWS CodeCommit repository.
- DeleteRepository, which deletes an AWS CodeCommit repository.
- GetRepository, which returns information about a specified repository.
- ListRepositories, which lists all AWS CodeCommit repositories associated with your AWS account.
- UpdateRepositoryDescription, which sets or updates the description of the repository.
- UpdateRepositoryName, which changes the name of the repository. If you change the name of a repository, no other users of that repository can access it until you send them the new HTTPS or SSH URL to use.

Branches, by calling the following:

- CreateBranch, which creates a branch in a specified repository.
- DeleteBranch, which deletes the specified branch in a repository unless it is the default branch.
- GetBranch, which returns information about a specified branch.
- ListBranches, which lists all branches for a specified repository.
- UpdateDefaultBranch, which changes the default branch for a repository.

Files, by calling the following:

- DeleteFile, which deletes the content of a specified file from a specified branch.
- GetBlob, which returns the base-64 encoded content of an individual Git blob object in a repository.
- GetFile, which returns the base-64 encoded content of a specified file.
- GetFolder, which returns the contents of a specified folder or directory.
- PutFile, which adds or modifies a single file in a specified repository and branch.

Commits, by calling the following:

- BatchGetCommits, which returns information about one or more commits in a repository.
- CreateCommit, which creates a commit for changes to a repository.
- GetCommit, which returns information about a commit, including commit messages and author and committer information.
- GetDifferences, which returns information about the differences in a valid commit specifier (such as a branch, tag, HEAD, commit ID, or other fully qualified reference).

Merges, by calling the following:

- BatchDescribeMergeConflicts, which returns information about conflicts in a merge between commits in a repository.
- CreateUnreferencedMergeCommit, which creates an unreferenced commit between two branches or commits for the purpose of comparing them and identifying any potential conflicts.
- DescribeMergeConflicts, which returns information about merge conflicts between the base, source, and destination versions of a file in a potential merge.
- GetMergeCommit, which returns information about the merge between a source and destination commit.
- GetMergeConflicts, which returns information about merge conflicts between the source and destination branch in a pull request.
- GetMergeOptions, which returns information about the available merge options between two branches or commit specifiers.
- MergeBranchesByFastForward, which merges two branches using the fast-forward merge option.
- MergeBranchesBySquash, which merges two branches using the squash merge option.
- MergeBranchesByThreeWay, which merges two branches using the three-way merge option.

Pull requests, by calling the following:

- CreatePullRequest, which creates a pull request in a specified repository.
- CreatePullRequestApprovalRule, which creates an approval rule for a specified pull request.
- DeletePullRequestApprovalRule, which deletes an approval rule for a specified pull request.
- DescribePullRequestEvents, which returns information about one or more pull request events.
- EvaluatePullRequestApprovalRules, which evaluates whether a pull request has met all the conditions specified in its associated approval rules.
- GetCommentsForPullRequest, which returns information about comments on a specified pull request.
- GetPullRequest, which returns information about a specified pull request.
- GetPullRequestApprovalStates, which returns information about the approval states for a specified pull request.
- GetPullRequestOverrideState, which returns information about whether approval rules have been set aside (overridden) for a pull request, and if so, the Amazon Resource Name (ARN) of the user or identity that overrode the rules and their requirements for the pull request.
- ListPullRequests, which lists all pull requests for a repository.
- MergePullRequestByFastForward, which merges the source destination branch of a pull request into the specified destination branch for that pull request using the fast-forward merge option.
- MergePullRequestBySquash, which merges the source destination branch of a pull request into the specified destination branch for that pull request using the squash merge option.
- MergePullRequestByThreeWay, which merges the source destination branch of a pull request into the specified destination branch for that pull request using the three-way merge option.
- OverridePullRequestApprovalRules, which sets aside all approval rule requirements for a pull request.
- PostCommentForPullRequest, which posts a comment to a pull request at the specified line, file, or request.
- UpdatePullRequestApprovalRuleContent, which updates the structure of an approval rule for a pull request.
- UpdatePullRequestApprovalState, which updates the state of an approval on a pull request.
- UpdatePullRequestDescription, which updates the description of a pull request.
- UpdatePullRequestStatus, which updates the status of a pull request.
- UpdatePullRequestTitle, which updates the title of a pull request.

Approval rule templates, by calling the following:

- AssociateApprovalRuleTemplateWithRepository, which associates a template with a specified repository. After the template is associated with a repository, AWS CodeCommit creates approval rules that match the template conditions on every pull request created in the specified repository.
- BatchAssociateApprovalRuleTemplateWithRepositories, which associates a template with one or more specified repositories. After the template is associated with a repository, AWS CodeCommit creates approval rules that match the template conditions on every pull request created in the specified repositories.
- BatchDisassociateApprovalRuleTemplateFromRepositories, which removes the association between a template and specified repositories so that approval rules based on the template are not automatically created when pull requests are created in those repositories.
- CreateApprovalRuleTemplate, which creates a template for approval rules that can then be associated with one or more repositories in your AWS account.
- DeleteApprovalRuleTemplate, which deletes the specified template. It does not remove approval rules on pull requests already created with the template.
- DisassociateApprovalRuleTemplateFromRepository, which removes the association between a template and a repository so that approval rules based on the template are not automatically created when pull requests are created in the specified repository.
- GetApprovalRuleTemplate, which returns information about an approval rule template.
- ListApprovalRuleTemplates, which lists all approval rule templates in the AWS Region in your AWS account.
- ListAssociatedApprovalRuleTemplatesForRepository, which lists all approval rule templates that are associated with a specified repository.
- ListRepositoriesForApprovalRuleTemplate, which lists all repositories associated with the specified approval rule template.
- UpdateApprovalRuleTemplateDescription, which updates the description of an approval rule template.
- UpdateApprovalRuleTemplateName, which updates the name of an approval rule template.
- UpdateApprovalRuleTemplateContent, which updates the content of an approval rule template.

Comments in a repository, by calling the following:

- DeleteCommentContent, which deletes the content of a comment on a commit in a repository.
- GetComment, which returns information about a comment on a commit.
- GetCommentReactions, which returns information about emoji reactions to comments.
- GetCommentsForComparedCommit, which returns information about comments on the comparison between two commit specifiers in a repository.
- PostCommentForComparedCommit, which creates a comment on the comparison between two commit specifiers in a repository.
- PostCommentReply, which creates a reply to a comment.
- PutCommentReaction, which creates or updates an emoji reaction to a comment.
- UpdateComment, which updates the content of a comment on a commit in a repository.

Tags used to tag resources in AWS CodeCommit (not Git tags), by calling the following:

- ListTagsForResource, which gets information about AWS tags for a specified Amazon Resource Name (ARN) in AWS CodeCommit.
- TagResource, which adds or updates tags for a resource in AWS CodeCommit.
- UntagResource, which removes tags for a resource in AWS CodeCommit.

Triggers, by calling the following:

- GetRepositoryTriggers, which returns information about triggers configured for a repository.
- PutRepositoryTriggers, which replaces all triggers for a repository and can be used to create or delete triggers.
- TestRepositoryTriggers, which tests the functionality of a repository trigger by sending data to the trigger target.

For information about how to use AWS CodeCommit, see the AWS CodeCommit User Guide.
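A minimal boto3 sketch touching a few of the repository, branch, and file calls above; the repository name, branch, and file path are placeholders.

```python
# Read repository metadata, list branches, and fetch a single file.
import boto3

codecommit = boto3.client("codecommit", region_name="us-east-1")

repo = codecommit.get_repository(repositoryName="example-repo")
print(repo["repositoryMetadata"].get("cloneUrlHttp"))

branches = codecommit.list_branches(repositoryName="example-repo")
print("branches:", branches.get("branches", []))

# GetFile returns the file content as bytes via boto3.
readme = codecommit.get_file(
    repositoryName="example-repo",
    commitSpecifier="main",
    filePath="README.md",
)
print(readme["fileContent"][:80])
```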

Local Search Client

microsoft.com
The Local Search client lets you send a search query to Bing and get back search results that include local businesses such as restaurants, hotels, retail stores, or other local businesses. The query can specify the name of the local business or it can ask for a list (for example, restaurants near me).

AWS Transfer Family

Amazon Web Services Transfer Family is a fully managed service that enables the transfer of files over the File Transfer Protocol (FTP), File Transfer Protocol over SSL (FTPS), or Secure Shell (SSH) File Transfer Protocol (SFTP) directly into and out of Amazon Simple Storage Service (Amazon S3). Amazon Web Services helps you seamlessly migrate your file transfer workflows to Amazon Web Services Transfer Family by integrating with existing authentication systems, and providing DNS routing with Amazon Route 53 so nothing changes for your customers and partners, or their applications. With your data in Amazon S3, you can use it with Amazon Web Services services for processing, analytics, machine learning, and archiving. Getting started with Amazon Web Services Transfer Family is easy since there is no infrastructure to buy and set up.