URL: https://cloud.google.com/architecture/distributed-load-testing-using-gke
Date Scraped: 2025-02-23T11:48:21.515Z
Content:
Distributed load testing using Google Kubernetes Engine

Last reviewed 2024-08-13 UTC

This document explains how to use Google Kubernetes Engine (GKE) to deploy a distributed load testing framework that uses multiple containers to create traffic for a simple REST-based API. This document load-tests a web application deployed to App Engine that exposes REST-style endpoints to respond to incoming HTTP POST requests. You can use this same pattern to create load testing frameworks for a variety of scenarios and applications, such as messaging systems, data stream management systems, and database systems.

Objectives

- Define environment variables to control deployment configuration.
- Create a GKE cluster.
- Perform load testing.
- Optionally, scale up the number of users or extend the pattern to other use cases.

Costs

In this document, you use the following billable components of Google Cloud: App Engine, Artifact Registry, Cloud Build, Cloud Storage, and Google Kubernetes Engine. To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial.

Before you begin

1. Start by creating a Google Cloud account. With this account, you get $300 in free credits, plus free usage of over 20 products, up to monthly limits.

2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project. Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project.

3. Make sure that billing is enabled for your Google Cloud project.

4. Enable the App Engine, Artifact Registry, Cloud Build, Compute Engine, Resource Manager, Google Kubernetes Engine, and Identity and Access Management APIs.

5. Grant roles to your user account. Run the following command once for each of the following IAM roles: roles/serviceusage.serviceUsageAdmin, roles/container.admin, roles/appengine.appAdmin, roles/appengine.appCreator, roles/artifactregistry.admin, roles/resourcemanager.projectIamAdmin, roles/compute.instanceAdmin.v1, roles/iam.serviceAccountUser, roles/cloudbuild.builds.builder, roles/iam.serviceAccountAdmin

       gcloud projects add-iam-policy-binding PROJECT_ID --member="user:USER_IDENTIFIER" --role=ROLE

   Replace PROJECT_ID with your project ID. Replace USER_IDENTIFIER with the identifier for your user account, for example, user:myemail@example.com. Replace ROLE with each individual role.

When you finish the tasks that are described in this document, you can avoid continued billing by deleting the resources that you created. For more information, see Clean up.
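Because the same command runs once per role, a short script can loop over the role list. The following is a minimal sketch, assuming the gcloud CLI is installed and authenticated; the project ID and user identifier are placeholders that you must replace.

    # Run the role-granting command once per role, as described above.
    # PROJECT_ID and USER_IDENTIFIER are placeholders; the role list is
    # copied from the list of required IAM roles in this document.
    import subprocess

    PROJECT_ID = "PROJECT_ID"                     # replace with your project ID
    USER_IDENTIFIER = "user:myemail@example.com"  # replace with your user account

    ROLES = [
        "roles/serviceusage.serviceUsageAdmin",
        "roles/container.admin",
        "roles/appengine.appAdmin",
        "roles/appengine.appCreator",
        "roles/artifactregistry.admin",
        "roles/resourcemanager.projectIamAdmin",
        "roles/compute.instanceAdmin.v1",
        "roles/iam.serviceAccountUser",
        "roles/cloudbuild.builds.builder",
        "roles/iam.serviceAccountAdmin",
    ]

    for role in ROLES:
        subprocess.run(
            ["gcloud", "projects", "add-iam-policy-binding", PROJECT_ID,
             f"--member={USER_IDENTIFIER}", f"--role={role}"],
            check=True,  # stop if any binding fails
        )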
Example workload

The following diagram shows an example workload where requests go from client to application. To model this interaction, you can use Locust, a distributed, Python-based load testing tool that can distribute requests across multiple target paths. For example, Locust can distribute requests to the /login and /metrics target paths. The workload is modeled as a set of tasks in Locust.

Architecture

This architecture involves two main components:

- The Locust Docker container image.
- The container orchestration and management mechanism.

The Locust Docker container image contains the Locust software. The Dockerfile, which you get when you clone the GitHub repository that accompanies this document, uses a base Python image and includes scripts to start the Locust service and execute the tasks. To approximate real-world clients, each Locust task is weighted. For example, registration happens once per thousand total client requests.

GKE provides container orchestration and management. With GKE, you can specify the number of container nodes that provide the foundation for your load testing framework. You can also organize your load testing workers into Pods, and specify how many Pods you want GKE to keep running.

To deploy the load testing tasks, you do the following:

1. Deploy a load testing primary, which is referred to as a master by Locust.
2. Deploy a group of load testing workers. With these load testing workers, you can create a substantial amount of traffic for testing purposes.

The following diagram shows the architecture for load testing a sample application. The master Pod serves the web interface used to operate and monitor load testing. The worker Pods generate the REST request traffic for the application under test, and send metrics to the master.

Note: Generating excessive amounts of traffic to external systems can resemble a denial-of-service attack. Be sure to review the Google Cloud Terms of Service and the Google Cloud Acceptable Use Policy.

About the load testing master

The Locust master is the entry point for executing the load testing tasks. The Locust master configuration specifies several elements, including the default ports used by the container:

- 8089 for the web interface
- 5557 and 5558 for communicating with workers

This information is later used to configure the Locust workers. You deploy a Service to ensure that the necessary ports are accessible to other Pods within the cluster through hostname:port. These ports are also referenceable through a descriptive port name. This Service allows the Locust workers to easily discover and reliably communicate with the master, even if the master fails and the Deployment replaces it with a new Pod.

A second Service is deployed with the annotation necessary to create an internal passthrough Network Load Balancer, which makes the Locust web application Service accessible to clients outside your cluster that use the same VPC network and are located in the same Google Cloud region as your cluster.

After you deploy the Locust master, you can open the web interface using the internal IP address provisioned by the internal passthrough Network Load Balancer. After you deploy the Locust workers, you can start the simulation and look at aggregate statistics through the Locust web interface.

About the load testing workers

The Locust workers execute the load testing tasks. You use a single Deployment to create multiple Pods. The Pods are spread out across the Kubernetes cluster.
Each Pod uses environment variables to control configuration information, such as the hostname of the system under test and the hostname of the Locust master. The following diagram shows the relationship between the Locust master and the Locust workers.

Initialize common variables

You must define several variables that control where elements of the infrastructure are deployed.

1. Open Cloud Shell. You run all the terminal commands in this document from Cloud Shell.

2. Set the environment variables that require customization:

       export GKE_CLUSTER=GKE_CLUSTER
       export AR_REPO=AR_REPO
       export REGION=REGION
       export ZONE=ZONE
       export SAMPLE_APP_LOCATION=SAMPLE_APP_LOCATION

   Replace the following:
   - GKE_CLUSTER: the name of your GKE cluster
   - AR_REPO: the name of your Artifact Registry repository
   - REGION: the region where your GKE cluster and Artifact Registry repository will be created
   - ZONE: the zone in your region where your Compute Engine instance will be created
   - SAMPLE_APP_LOCATION: the (regional) location where your sample App Engine application will be deployed

   The commands should look similar to the following example:

       export GKE_CLUSTER=gke-lt-cluster
       export AR_REPO=dist-lt-repo
       export REGION=us-central1
       export ZONE=us-central1-b
       export SAMPLE_APP_LOCATION=us-central

3. Set the following additional environment variables:

       export GKE_NODE_TYPE=e2-standard-4
       export GKE_SCOPE="https://www.googleapis.com/auth/cloud-platform"
       export PROJECT=$(gcloud config get-value project)
       export SAMPLE_APP_TARGET=${PROJECT}.appspot.com

4. Set the default zone so you don't have to specify it in subsequent commands:

       gcloud config set compute/zone ${ZONE}

Create a GKE cluster

1. Create a service account with the minimum permissions required by the cluster:

       gcloud iam service-accounts create dist-lt-svc-acc
       gcloud projects add-iam-policy-binding ${PROJECT} --member=serviceAccount:dist-lt-svc-acc@${PROJECT}.iam.gserviceaccount.com --role=roles/artifactregistry.reader
       gcloud projects add-iam-policy-binding ${PROJECT} --member=serviceAccount:dist-lt-svc-acc@${PROJECT}.iam.gserviceaccount.com --role=roles/container.nodeServiceAccount

2. Create the GKE cluster:

       gcloud container clusters create ${GKE_CLUSTER} \
         --service-account=dist-lt-svc-acc@${PROJECT}.iam.gserviceaccount.com \
         --region ${REGION} \
         --machine-type ${GKE_NODE_TYPE} \
         --enable-autoscaling \
         --num-nodes 3 \
         --min-nodes 3 \
         --max-nodes 10 \
         --scopes "${GKE_SCOPE}"

3. Connect to the GKE cluster:

       gcloud container clusters get-credentials ${GKE_CLUSTER} \
         --region ${REGION} \
         --project ${PROJECT}

Set up the environment

1. Clone the sample repository from GitHub:

       git clone https://github.com/GoogleCloudPlatform/distributed-load-testing-using-kubernetes

2. Change your working directory to the cloned repository:

       cd distributed-load-testing-using-kubernetes

Build the container image

1. Create an Artifact Registry repository:

       gcloud artifacts repositories create ${AR_REPO} \
         --repository-format=docker \
         --location=${REGION} \
         --description="Distributed load testing with GKE and Locust"

2. Build the container image and store it in your Artifact Registry repository:

       export LOCUST_IMAGE_NAME=locust-tasks
       export LOCUST_IMAGE_TAG=latest
       gcloud builds submit \
         --tag ${REGION}-docker.pkg.dev/${PROJECT}/${AR_REPO}/${LOCUST_IMAGE_NAME}:${LOCUST_IMAGE_TAG} \
         docker-image

The accompanying Locust Docker image embeds a test task that calls the /login and /metrics endpoints in the sample application.
In this example test task set, the ratio of requests submitted to these two endpoints is 1 to 999 (imports added here for completeness; the full file is in the cloned repository):

    import uuid
    from datetime import datetime

    from locust import FastHttpUser, TaskSet, task


    class MetricsTaskSet(TaskSet):
        _deviceid = None

        def on_start(self):
            self._deviceid = str(uuid.uuid4())

        @task(1)
        def login(self):
            self.client.post('/login', {"deviceid": self._deviceid})

        @task(999)
        def post_metrics(self):
            self.client.post(
                "/metrics", {"deviceid": self._deviceid, "timestamp": datetime.now()})


    class MetricsLocust(FastHttpUser):
        tasks = {MetricsTaskSet}

Verify that the Docker image is in your Artifact Registry repository:

    gcloud artifacts docker images list ${REGION}-docker.pkg.dev/${PROJECT}/${AR_REPO} | \
      grep ${LOCUST_IMAGE_NAME}

The output is similar to the following:

    Listing items under project PROJECT, location REGION, repository AR_REPO

    REGION-docker.pkg.dev/PROJECT/AR_REPO/locust-tasks  sha256:796d4be067eae7c82d41824791289045789182958913e57c0ef40e8d5ddcf283  2022-04-13T01:55:02  2022-04-13T01:55:02

Deploy the sample application

1. Create and deploy the sample-webapp to App Engine:

       gcloud app create --region=${SAMPLE_APP_LOCATION}
       gcloud app deploy sample-webapp/app.yaml \
         --project=${PROJECT}

2. When prompted, type y to proceed with deployment.

   The output is similar to the following:

       File upload done.
       Updating service [default]...done.
       Setting traffic split for service [default]...done.
       Deployed service [default] to [https://PROJECT.appspot.com]

The sample App Engine application implements the /login and /metrics endpoints (it is a Flask application; the surrounding setup is shown for context):

    from flask import Flask, request

    app = Flask(__name__)

    @app.route('/login', methods=['GET', 'POST'])
    def login():
        deviceid = request.values.get('deviceid')
        return '/login - device: {}\n'.format(deviceid)

    @app.route('/metrics', methods=['GET', 'POST'])
    def metrics():
        deviceid = request.values.get('deviceid')
        timestamp = request.values.get('timestamp')
        return '/metrics - device: {}, timestamp: {}\n'.format(deviceid, timestamp)

Deploy the Locust master and worker Pods

1. Substitute the environment variable values for target host, project, and image parameters in the locust-master-controller.yaml and locust-worker-controller.yaml files, and create the Locust master and worker Deployments:

       envsubst < kubernetes-config/locust-master-controller.yaml.tpl | kubectl apply -f -
       envsubst < kubernetes-config/locust-worker-controller.yaml.tpl | kubectl apply -f -
       envsubst < kubernetes-config/locust-master-service.yaml.tpl | kubectl apply -f -

2. Verify the Locust Deployments:

       kubectl get pods -o wide

   The output looks something like the following:

       NAME                             READY   STATUS    RESTARTS   AGE   IP           NODE
       locust-master-87f8ffd56-pxmsk    1/1     Running   0          1m    10.32.2.6    gke-gke-load-test-default-pool-96a3f394
       locust-worker-58879b475c-279q9   1/1     Running   0          1m    10.32.1.5    gke-gke-load-test-default-pool-96a3f394
       locust-worker-58879b475c-9frbw   1/1     Running   0          1m    10.32.2.8    gke-gke-load-test-default-pool-96a3f394
       locust-worker-58879b475c-dppmz   1/1     Running   0          1m    10.32.2.7    gke-gke-load-test-default-pool-96a3f394
       locust-worker-58879b475c-g8tzf   1/1     Running   0          1m    10.32.0.11   gke-gke-load-test-default-pool-96a3f394
       locust-worker-58879b475c-qcscq   1/1     Running   0          1m    10.32.1.4    gke-gke-load-test-default-pool-96a3f394

3. Verify the Services:

       kubectl get services

   The output looks something like the following:

       NAME                TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
       kubernetes          ClusterIP      10.87.240.1     <none>        443/TCP             12m
       locust-master       ClusterIP      10.87.245.22    <none>        5557/TCP,5558/TCP   1m
       locust-master-web   LoadBalancer   10.87.246.225   <pending>     8089:31454/TCP      1m
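If you prefer to script these checks rather than inspect kubectl output by hand, the official Kubernetes Python client can block until the Deployments report ready. The following is a minimal sketch; it assumes your kubeconfig was populated by the earlier get-credentials command and that the manifests deploy into the default namespace (an assumption; adjust if yours differ).

    # Minimal readiness check for the Locust Deployments.
    # Assumes kubeconfig is set up by `gcloud container clusters get-credentials`
    # and the "default" namespace (an assumption based on the sample manifests).
    import time

    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    for name in ("locust-master", "locust-worker"):
        while True:
            dep = apps.read_namespaced_deployment(name, "default")
            ready = dep.status.ready_replicas or 0
            if ready == dep.spec.replicas:
                print(f"{name}: {ready}/{dep.spec.replicas} replicas ready")
                break
            time.sleep(5)  # poll until all replicas are ready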
Run a watch loop while the internal passthrough Network Load Balancer's internal IP address (shown as the GKE EXTERNAL-IP) is provisioned for the Locust master web application Service:

    kubectl get svc locust-master-web --watch

Press Ctrl+C to exit the watch loop after an EXTERNAL-IP address is provisioned.

Connect to Locust web frontend

You use the Locust master web interface to execute the load testing tasks against the system under test.

1. Make a note of the internal load balancer IP address of the web host service:

       export INTERNAL_LB_IP=$(kubectl get svc locust-master-web \
         -o jsonpath="{.status.loadBalancer.ingress[0].ip}") && \
         echo $INTERNAL_LB_IP

2. Depending on your network configuration, there are two ways that you can connect to the Locust web application through the provisioned IP address:

   - Network routing: If your network is configured to allow routing from your workstation to your project VPC network, you can directly access the internal passthrough Network Load Balancer IP address from your workstation.
   - Proxy and SSH tunnel: If there is no network route between your workstation and your VPC network, you can route traffic to the internal passthrough Network Load Balancer's IP address by creating a Compute Engine instance with an nginx proxy and an SSH tunnel between your workstation and the instance.

Network routing

If there is a route for network traffic between your workstation and your Google Cloud project VPC network, open your browser and go to the following URL to open the Locust master web interface:

    http://INTERNAL_LB_IP:8089

Replace INTERNAL_LB_IP with the IP address that you noted in the previous step.

Proxy and SSH tunnel

1. Set an environment variable with the name of the instance:

       export PROXY_VM=locust-nginx-proxy

2. Start an instance with an nginx Docker container configured to proxy the Locust web application port 8089 on the internal passthrough Network Load Balancer:

       gcloud compute instances create-with-container ${PROXY_VM} \
         --zone ${ZONE} \
         --container-image gcr.io/cloud-marketplace/google/nginx1:latest \
         --container-mount-host-path=host-path=/tmp/server.conf,mount-path=/etc/nginx/conf.d/default.conf \
         --metadata=startup-script="#! /bin/bash
       cat <<EOF > /tmp/server.conf
       server {
           listen 8089;
           location / {
               proxy_pass http://${INTERNAL_LB_IP}:8089;
           }
       }
       EOF"

3. Open an SSH tunnel from Cloud Shell to the proxy instance:

       gcloud compute ssh --zone ${ZONE} ${PROXY_VM} \
         -- -N -L 8089:localhost:8089

4. Click the Web Preview icon, and select Change Port from the options listed.

5. On the Change Preview Port dialog, enter 8089 in the Port Number field, and select Change and Preview.

In a moment, a browser tab opens with the Locust web interface.

Run a basic load test on your sample application

After you open the Locust frontend in your browser, you see a dialog that you can use to start a new load test.

1. Specify the total Number of users (peak concurrency) as 10 and the Spawn rate (users started/second) as 5 users per second.

2. Click Start swarming to begin the simulation. After requests start swarming, statistics begin to aggregate for simulation metrics, such as the number of requests and requests per second.

3. View the deployed service and other metrics from the Google Cloud console. Note: After the swarm test, it might take the App Engine dashboard several minutes to show the metrics.

4. When you have observed the behavior of the application under test, click Stop to terminate the test.
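You can also drive this same start-observe-stop workflow without the browser, through the HTTP endpoints that the Locust master's web interface exposes. The following is a rough sketch, assuming a Locust version that accepts the user_count and spawn_rate form fields (older releases used different field names) and that port 8089 is reachable, for example through the SSH tunnel at localhost:8089.

    # Rough sketch: drive the Locust web API instead of the browser UI.
    # Assumes Locust's /swarm, /stats/requests, and /stop endpoints and the
    # user_count/spawn_rate field names (these vary across Locust versions).
    import time

    import requests

    BASE = "http://localhost:8089"  # or http://INTERNAL_LB_IP:8089

    # Start a test: 10 concurrent users, spawning 5 per second.
    resp = requests.post(f"{BASE}/swarm", data={"user_count": 10, "spawn_rate": 5})
    resp.raise_for_status()

    time.sleep(60)  # let the test run for a minute

    # Fetch aggregate statistics as JSON and print a brief summary.
    stats = requests.get(f"{BASE}/stats/requests").json()
    for row in stats["stats"]:
        print(row["name"], row["num_requests"], row["num_failures"])

    requests.get(f"{BASE}/stop")  # terminate the test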
Scale up the number of users (optional)

If you want to test increased load on the application, you can add simulated users. Before you add simulated users, you must ensure that there are enough resources to support the increase in load. With Google Cloud, you can add Locust worker Pods to the Deployment without redeploying the existing Pods, as long as you have the underlying VM resources to support an increased number of Pods. The initial GKE cluster starts with 3 nodes and can autoscale up to 10 nodes.

1. Scale the pool of Locust worker Pods to 20:

       kubectl scale deployment/locust-worker --replicas=20

   It takes a few minutes to deploy and start the new Pods.

2. If you see a Pod Unschedulable error, you must add more nodes to the cluster. For details, see Resizing a GKE cluster.

3. After the Pods start, return to the Locust master web interface and restart load testing.

Extend the pattern

To extend this pattern, you can create new Locust tasks or even switch to a different load testing framework. You can also customize the metrics that you collect. For example, you might want to measure requests per second, monitor response latency as load increases, or check response failure rates and types of errors. For information, see the Cloud Monitoring documentation.

Clean up

To avoid incurring charges to your Google Cloud account for the resources used in this document, either delete the project that contains the resources, or keep the project and delete the individual resources.

Delete the project

Caution: Deleting a project has the following effects:
- Everything in the project is deleted. If you used an existing project for the tasks in this document, when you delete it, you also delete any other work you've done in the project.
- Custom project IDs are lost. When you created this project, you might have created a custom project ID that you want to use in the future. To preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole project.

If you plan to explore multiple architectures, tutorials, or quickstarts, reusing projects can help you avoid exceeding project quota limits.

1. In the Google Cloud console, go to the Manage resources page.
2. In the project list, select the project that you want to delete, and then click Delete.
3. In the dialog, type the project ID, and then click Shut down to delete the project.

Delete the GKE cluster

If you don't want to delete the whole project, run the following command to delete the GKE cluster:

    gcloud container clusters delete ${GKE_CLUSTER} --region ${REGION}

What's next

- Building Scalable and Resilient Web Applications.
- Review the GKE documentation in more detail.
- Try tutorials on GKE.
- For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center.
URL: https://cloud.google.com/architecture/framework/reliability/perform-testing-for-recovery-from-data-loss
Date Scraped: 2025-02-23T11:43:40.364Z
Content:
Perform testing for recovery from data loss

Last reviewed 2024-12-30 UTC

This principle in the reliability pillar of the Google Cloud Architecture Framework provides recommendations to help you design and run tests for recovery from data loss. This principle is relevant to the learning focus area of reliability.

Principle overview

To ensure that your system can recover from situations where data is lost or corrupted, you need to run tests for those scenarios. Instances of data loss might be caused by a software bug or some type of natural disaster. After such events, you need to restore data from backups and bring all of the services back up again by using the freshly restored data.

We recommend that you use three criteria to judge the success or failure of this type of recovery test: data integrity, recovery time objective (RTO), and recovery point objective (RPO). For details about the RTO and RPO metrics, see Basics of DR planning.

The goal of data restoration testing is to periodically verify that your organization can continue to meet business continuity requirements. Besides measuring RTO and RPO, a data restoration test must include testing of the entire application stack and all the critical infrastructure services with the restored data. This is necessary to confirm that the entire deployed application works correctly in the test environment.

Recommendations

When you design and run tests for recovering from data loss, consider the recommendations in the following subsections.

Verify backup consistency and test restoration processes

You need to verify that your backups contain consistent and usable snapshots of data that you can restore to immediately bring applications back into service. To validate data integrity, set up automated consistency checks to run after each backup. To test backups, restore them in a non-production environment. To ensure that your backups can be restored efficiently and that the restored data meets application requirements, regularly simulate data recovery scenarios. Document the steps for data restoration, and train your teams to execute the steps effectively during a failure.

Schedule regular and frequent backups

To minimize data loss during restoration and to meet RPO targets, it's essential to have regularly scheduled backups. Establish a backup frequency that aligns with your RPO. For example, if your RPO is 15 minutes, schedule backups to run at least every 15 minutes. Optimize the backup intervals to reduce the risk of data loss. Use Google Cloud tools like Cloud Storage, Cloud SQL automated backups, or Spanner backups to schedule and manage backups. For critical applications, use near-continuous backup solutions like point-in-time recovery (PITR) for Cloud SQL or incremental backups for large datasets.

Define and monitor RPO

Set a clear RPO based on your business needs, and monitor adherence to the RPO. If backup intervals exceed the defined RPO, use Cloud Monitoring to set up alerts.

Monitor backup health

Use Google Cloud Backup and DR service or similar tools to track the health of your backups and confirm that they are stored in secure and reliable locations. Ensure that the backups are replicated across multiple regions for added resilience.
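As a concrete illustration of monitoring adherence to an RPO, the following minimal sketch checks the age of the newest backup object in a Cloud Storage bucket. The bucket name and prefix are hypothetical placeholders, and in practice you would raise a Cloud Monitoring alert rather than print to stdout.

    # Minimal RPO-adherence check, assuming backups land in a Cloud Storage
    # bucket and Application Default Credentials are configured.
    # BUCKET and PREFIX are hypothetical placeholders.
    from datetime import datetime, timedelta, timezone

    from google.cloud import storage

    BUCKET = "my-backup-bucket"   # placeholder
    PREFIX = "db-backups/"        # placeholder
    RPO = timedelta(minutes=15)   # the example RPO from the text above

    client = storage.Client()
    blobs = list(client.list_blobs(BUCKET, prefix=PREFIX))
    if not blobs:
        raise SystemExit("No backups found: RPO cannot be met")

    # Find the most recently created backup object and compare its age to the RPO.
    newest = max(blobs, key=lambda b: b.time_created)
    age = datetime.now(timezone.utc) - newest.time_created
    if age > RPO:
        print(f"RPO violated: newest backup {newest.name} is {age} old")
    else:
        print(f"OK: newest backup {newest.name} is {age} old")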
Plan for scenarios beyond backup

Combine backups with disaster recovery strategies, like active-active failover setups or cross-region replication, for improved recovery time in extreme cases. For more information, see the Disaster recovery planning guide.
URL: https://cloud.google.com/architecture/framework/reliability/perform-testing-for-recovery-from-failures
Date Scraped: 2025-02-23T11:43:37.864Z
Content:
Perform testing for recovery from failures

Last reviewed 2024-12-30 UTC

This principle in the reliability pillar of the Google Cloud Architecture Framework provides recommendations to help you design and run tests for recovery in the event of failures. This principle is relevant to the learning focus area of reliability.

Principle overview

To be sure that your system can recover from failures, you must periodically run tests that include regional failovers, release rollbacks, and data restoration from backups. This testing helps you to practice responses to events that pose major risks to reliability, such as the outage of an entire region. This testing also helps you verify that your system behaves as intended during a disruption.

In the unlikely event of an entire region going down, you need to fail over all traffic to another region. During normal operation of your workload, when data is modified, it needs to be synchronized from the primary region to the failover region. You need to verify that the replicated data is always current, so that users don't experience data loss or session breakage. The load balancing system must also be able to shift traffic to the failover region at any time without service interruptions. To minimize downtime after a regional outage, operations engineers also need to be able to manually shift user traffic away from a region in as little time as possible. This operation is sometimes called draining a region, which means that you stop the inbound traffic to the region and move all the traffic elsewhere.

Recommendations

When you design and run tests for failure recovery, consider the recommendations in the following subsections.

Define the testing objectives and scope

Clearly define what you want to achieve from the testing. For example, your objectives can include the following:

- Validate the recovery time objective (RTO) and the recovery point objective (RPO). For details, see Basics of DR planning.
- Assess system resilience and fault tolerance under various failure scenarios.
- Test the effectiveness of automated failover mechanisms.

Decide which components, services, or regions are in the testing scope. The scope can include specific application tiers like the frontend, backend, and database, or it can include specific Google Cloud resources like Cloud SQL instances or GKE clusters. The scope must also specify any external dependencies, such as third-party APIs or cloud interconnections.

Prepare the environment for testing

- Choose an appropriate environment, preferably a staging or sandbox environment that replicates your production setup. If you conduct the test in production, ensure that you have safety measures ready, like automated monitoring and manual rollback procedures.
- Create a backup plan. Take snapshots or backups of critical databases and services to prevent data loss during the test (see the sketch after this list). Ensure that your team is prepared to do manual interventions if the automated failover mechanisms fail.
- To prevent test disruptions, ensure that your IAM roles, policies, and failover configurations are correctly set up. Verify that the necessary permissions are in place for the test tools and scripts.
- Inform stakeholders, including operations, DevOps, and application owners, about the test schedule, scope, and potential impact. Provide stakeholders with an estimated timeline and the expected behaviors during the test.
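For the backup step above, one simple approach is to take on-demand Cloud SQL backups immediately before a drill. The following is a small sketch that shells out to the gcloud CLI; the instance names are hypothetical placeholders, and it assumes gcloud is installed and authenticated.

    # Pre-drill safety net: create an on-demand Cloud SQL backup for each
    # critical instance before simulating failures. Instance names are
    # hypothetical placeholders.
    import subprocess

    INSTANCES = ["orders-db", "sessions-db"]  # placeholders

    for instance in INSTANCES:
        subprocess.run(
            ["gcloud", "sql", "backups", "create",
             f"--instance={instance}",
             "--description=pre-failover-drill"],
            check=True,  # abort the drill prep if a backup fails
        )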
Simulate failure scenarios

Plan and execute failures by using tools like Chaos Monkey. You can use custom scripts to simulate failures of critical services, such as a shutdown of a primary node in a multi-zone GKE cluster or a disabled Cloud SQL instance. You can also use scripts to simulate a region-wide network outage by using firewall rules or API restrictions, based on the scope of your test. Gradually escalate the failure scenarios to observe system behavior under various conditions. Introduce load testing alongside failure scenarios to replicate real-world usage during outages. Test cascading failure impacts, such as how frontend systems behave when backend services are unavailable. To validate configuration changes and to assess the system's resilience against human errors, test scenarios that involve misconfigurations. For example, run tests with incorrect DNS failover settings or incorrect IAM permissions.

Monitor system behavior

Monitor how load balancers, health checks, and other mechanisms reroute traffic. Use Google Cloud tools like Cloud Monitoring and Cloud Logging to capture metrics and events during the test. Observe changes in latency, error rates, and throughput during and after the failure simulation, and monitor the overall performance impact. Identify any degradation or inconsistencies in the user experience. Ensure that logs are generated and alerts are triggered for key events, such as service outages or failovers. Use this data to verify the effectiveness of your alerting and incident response systems.

Verify recovery against your RTO and RPO

Measure how long it takes for the system to resume normal operations after a failure, and then compare this data with the defined RTO and document any gaps. Ensure that data integrity and availability align with the RPO. To test database consistency, compare snapshots or backups of the database before and after a failure. Evaluate service restoration and confirm that all services are restored to a functional state with minimal user disruption.

Document and analyze results

Document each test step, failure scenario, and corresponding system behavior. Include timestamps, logs, and metrics for detailed analyses. Highlight bottlenecks, single points of failure, or unexpected behaviors observed during the test. To help prioritize fixes, categorize issues by severity and impact. Suggest improvements to the system architecture, failover mechanisms, or monitoring setups. Based on test findings, update any relevant failover policies and playbooks. Present a postmortem report to stakeholders. The report should summarize the outcomes, lessons learned, and next steps. For more information, see Conduct thorough postmortems.

Iterate and improve

To validate ongoing reliability and resilience, plan periodic testing (for example, quarterly). Run tests under different scenarios, including infrastructure changes, software updates, and increased traffic loads. Automate failover tests by using CI/CD pipelines to integrate reliability testing into your development lifecycle. During the postmortem, use feedback from stakeholders and end users to improve the test process and system resilience.
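To make the RTO measurement described in "Verify recovery against your RTO and RPO" concrete, here is a rough sketch of a downtime probe that polls a health endpoint and reports how long the service stayed down. The health-check URL is a hypothetical placeholder; a production probe would also need to handle flapping and export its results to Cloud Monitoring.

    # Rough RTO probe: poll a health endpoint once per second during a
    # failover drill and report the measured downtime. The URL is a
    # hypothetical placeholder for your application's health check.
    import time
    import urllib.request

    URL = "https://example-app.example.com/healthz"  # placeholder
    outage_start = None

    while True:
        try:
            with urllib.request.urlopen(URL, timeout=5) as resp:
                healthy = resp.status == 200
        except Exception:
            healthy = False

        now = time.time()
        if not healthy and outage_start is None:
            outage_start = now
            print("Outage detected")
        elif healthy and outage_start is not None:
            print(f"Recovered; measured downtime (RTO): {now - outage_start:.0f} s")
            break
        time.sleep(1)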