Professional Cloud DevOps Engineer Updated Questions | Killtest
Apr 30, 2021
Not sure you can pass the Professional Cloud DevOps Engineer exam? You are in the right place. We have just updated the Professional Cloud DevOps Engineer exam questions, which can ensure you prepare for the test well. After passing the Professional Cloud DevOps Engineer exam, you will be skilled at using Google Cloud Platform to build software delivery pipelines, deploy and monitor services, and manage and learn from incidents. There are 52 Q&As in the updated Google Professional Cloud DevOps Engineer exam material, which are a solid guide for studying all the related topics.
Google Professional Cloud DevOps Engineer Exam
Professional Cloud DevOps Engineer Exam Topics
The Professional Cloud DevOps Engineer exam topics are listed below.
1. Applying site reliability engineering principles to a service
2. Building and implementing CI/CD pipelines for a service
3. Implementing service monitoring strategies
4. Optimizing service performance
5. Managing service incidents
Killtest's newly updated Google Professional Cloud DevOps Engineer exam questions can help you save a lot of time in preparing for this Google test. Some Google Professional Cloud DevOps Engineer demo questions and answers are shared below.
Your application images are built and pushed to Google Container Registry (GCR). You want to build an automated pipeline that deploys the application when the image is updated while minimizing the development effort. What should you do?
A. Use Cloud Build to trigger a Spinnaker pipeline.
B. Use Cloud Pub/Sub to trigger a Spinnaker pipeline.
C. Use a custom builder in Cloud Build to trigger a Jenkins pipeline.
D. Use Cloud Pub/Sub to trigger a custom deployment service running in Google Kubernetes Engine (GKE).
Answer: D
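For context, the "custom deployment service" in option D could be as simple as a Pub/Sub subscriber that reacts to GCR push notifications and updates a GKE Deployment. The sketch below is only an illustration; it assumes the google-cloud-pubsub and kubernetes Python client libraries, and the project, subscription, deployment, and container names are placeholders.

# Minimal sketch of option D: a service running in GKE that listens for
# GCR push notifications on Pub/Sub and rolls the Deployment to the new image.
import json

from google.cloud import pubsub_v1
from kubernetes import client, config

PROJECT_ID = "my-project"            # placeholder
SUBSCRIPTION = "gcr-image-updates"   # placeholder subscription on GCR's notification topic
DEPLOYMENT = "my-app"                # placeholder Deployment and container name
NAMESPACE = "default"

config.load_incluster_config()       # the service runs inside the GKE cluster
apps = client.AppsV1Api()

def handle_notification(message):
    event = json.loads(message.data.decode("utf-8"))
    # GCR push notifications include an action and the pushed image tag.
    if event.get("action") == "INSERT" and "tag" in event:
        new_image = event["tag"]     # e.g. gcr.io/my-project/my-app:v2
        patch = {"spec": {"template": {"spec": {
            "containers": [{"name": DEPLOYMENT, "image": new_image}]}}}}
        apps.patch_namespaced_deployment(DEPLOYMENT, NAMESPACE, patch)
    message.ack()                    # acknowledge deletes and other events too

subscriber = pubsub_v1.SubscriberClient()
sub_path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION)
subscriber.subscribe(sub_path, callback=handle_notification).result()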
Your application services run in Google Kubernetes Engine (GKE). You want to make sure that only images from your centrally-managed Google Container Registry (GCR) image registry in the altostrat-images project can be deployed to the cluster while minimizing development time. What should you do?
A. Create a custom builder for Cloud Build that will only push images to gcr.io/altostrat-images.
B. Use a Binary Authorization policy that includes the whitelist name pattern gcr.io/altostrat-images/.
C. Add logic to the deployment pipeline to check that all manifests contain only images from gcr.io/altostrat-images.
D. Add a tag to each image in gcr.io/altostrat-images and check that this tag is present when the image is deployed.
Answer: A
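Whichever control you pick, the rule being enforced is that every container image reference must start with gcr.io/altostrat-images/. As a purely illustrative sketch of the pipeline check described in option C, a Python step could scan the rendered manifests (PyYAML assumed, manifest paths passed as arguments):

# Hypothetical pipeline step: fail the deployment if any manifest references
# an image outside the centrally-managed registry path.
import sys
import yaml   # assumes PyYAML is available in the pipeline environment

ALLOWED_PREFIX = "gcr.io/altostrat-images/"

def images_in(manifest):
    spec = manifest.get("spec", {})
    # Works for Deployments (spec.template.spec) and bare Pods (spec).
    pod_spec = spec.get("template", {}).get("spec", spec)
    for container in pod_spec.get("containers", []) + pod_spec.get("initContainers", []):
        yield container.get("image", "")

def main(paths):
    bad = []
    for path in paths:
        with open(path) as f:
            for doc in yaml.safe_load_all(f):
                if not doc:
                    continue
                bad += [img for img in images_in(doc) if not img.startswith(ALLOWED_PREFIX)]
    if bad:
        sys.exit(f"Disallowed images found: {bad}")

if __name__ == "__main__":
    main(sys.argv[1:])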
You support a high-traffic web application and want to ensure that the home page loads in a timely manner. As a first step, you decide to implement a Service Level Indicator (SLI) to represent home page request latency with an acceptable page load time set to 100 ms. What is the Google-recommended way of calculating this SLI?
A. Bucketize the request latencies into ranges, and then compute the percentile at 100 ms.
B. Bucketize the request latencies into ranges, and then compute the median and 90th percentiles.
C. Count the number of home page requests that load in under 100 ms, and then divide by the total number of home page requests.
D. Count the number of home page requests that load in under 100 ms, and then divide by the total number of all web application requests.
Answer: A
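For a concrete feel of the "good requests divided by total requests" ratio described in options C and D, here is a tiny worked example; the latency values are made up for illustration:

# Made-up home page request latencies (ms), purely for illustration.
home_page_latencies_ms = [45, 80, 120, 95, 60, 210, 99, 70, 150, 88]

THRESHOLD_MS = 100  # acceptable page load time from the question

good = sum(1 for latency in home_page_latencies_ms if latency < THRESHOLD_MS)
total = len(home_page_latencies_ms)

sli = good / total  # fraction of home page requests served within 100 ms
print(f"Latency SLI: {sli:.2f}")  # 7 of 10 requests are under 100 ms -> 0.70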
Your organization recently adopted a container-based workflow for application development. Your team develops numerous applications that are deployed continuously through an automated build pipeline to a Kubernetes cluster in the production environment. The security auditor is concerned that developers or operators could circumvent automated testing and push code changes to production without approval. What should you do to enforce approvals?
A. Configure the build system with protected branches that require pull request approval.
B. Use an Admission Controller to verify that incoming requests originate from approved sources.
C. Leverage Kubernetes Role-Based Access Control (RBAC) to restrict access to only approved users.
D. Enable binary authorization inside the Kubernetes cluster and configure the build pipeline as an attestor.
Answer: A
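As a hypothetical illustration of option A, and assuming the source repository happens to be hosted on GitHub, branch protection requiring an approving pull request review can be set through the API; the owner, repository, token, and status-check names below are placeholders:

# Hypothetical example: protect the main branch so changes need a pull
# request with at least one approving review before they can be merged.
import requests

OWNER, REPO, BRANCH = "example-org", "example-app", "main"
GITHUB_TOKEN = "ghp_..."  # placeholder; needs repository admin permissions

resp = requests.put(
    f"https://api.github.com/repos/{OWNER}/{REPO}/branches/{BRANCH}/protection",
    headers={
        "Authorization": f"Bearer {GITHUB_TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    json={
        "required_status_checks": {"strict": True, "contexts": ["ci/build"]},
        "enforce_admins": True,
        "required_pull_request_reviews": {"required_approving_review_count": 1},
        "restrictions": None,
    },
)
resp.raise_for_status()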
You support an application that stores product information in cached memory. For every cache miss, an entry is logged in Stackdriver Logging. You want to visualize how often a cache miss happens over time. What should you do?
A. Link Stackdriver Logging as a source in Google Data Studio. Filter the logs on the cache misses.
B. Configure Stackdriver Profiler to identify and visualize when the cache misses occur based on the logs.
C. Create a logs-based metric in Stackdriver Logging and a dashboard for that metric in Stackdriver Monitoring.
D. Configure BigQuery as a sink for Stackdriver Logging. Create a scheduled query to filter the cache miss logs and write them to a separate table.
Answer: C
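A rough sketch of answer C, assuming the google-cloud-logging client library (Stackdriver Logging is now Cloud Logging): create a logs-based metric that counts cache-miss entries, then chart it in Monitoring. The metric name and log filter below are assumptions to adjust for the real log entries:

# Create a logs-based metric that counts cache miss log entries.
from google.cloud import logging as cloud_logging  # Cloud Logging client, not stdlib logging

client = cloud_logging.Client()

metric = client.metric(
    "cache_miss_count",                          # placeholder metric name
    filter_='textPayload:"cache miss"',          # placeholder log filter
    description="Counts cache miss log entries",
)
if not metric.exists():
    metric.create()

# The metric then shows up in Monitoring as
# logging.googleapis.com/user/cache_miss_count and can be added to a dashboard.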