
Lab 5

Task 1

Ensuring project ID is set so commands run in the right project (for billing and such): img.png

Enabling APIs so we can access the resources (Cloud Resource Manager, Artifact Registry, Cloud Run, and Cloud Build): img_1.png
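The exact commands are only visible in the screenshots; a typical sequence for these two steps looks like the following (`PROJECT_ID` is a placeholder for the lab project):

```shell
# Point gcloud at the right project so all commands (and billing) land there.
gcloud config set project "$PROJECT_ID"

# Enable the APIs this lab relies on.
gcloud services enable \
  cloudresourcemanager.googleapis.com \
  artifactregistry.googleapis.com \
  run.googleapis.com \
  cloudbuild.googleapis.com
```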

Ensuring we have necessary ART_REG environment variable:

img_2.png

Ensuring Artifact Registry is accessible and working by listing our previous images from other labs: img_3.png
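A sketch of these two checks; the region and repository name below are assumptions, not the lab's actual values:

```shell
# ART_REG holds the full Artifact Registry path used by later build/push steps.
export ART_REG="us-east1-docker.pkg.dev/$PROJECT_ID/lab-repo"

# Listing images confirms the registry is reachable and our credentials work.
gcloud artifacts docker images list "$ART_REG"
```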

Making and entering the lab 5 directory (using my new favorite Bash shortcuts!): img_4.png

Cloning the repo, finding out Cloud Shell doesn't come with tree (?), and verifying the clone succeeded: img_5.png

Creating a .venv and verifying its creation (so as not to install packages globally): img_6.png

Entering and confirming .venv environment: img_7.png
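The venv steps can be sketched as follows (directory name assumed to be `.venv` as in the screenshots):

```shell
# Create an isolated environment so lab dependencies stay out of the
# global Python installation, then activate and confirm it.
python3 -m venv .venv
. .venv/bin/activate

# sys.prefix should now point inside .venv rather than the system Python.
python -c 'import sys; print(sys.prefix)'
```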

Installing gRPC tools:

img_8.png

Creating a restore script (with minor environment changes) and sourcing it: img_9.png
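Inside the activated .venv, the gRPC tooling install is a single pip command (package names are the standard ones; versions unpinned here as an assumption):

```shell
# grpcio is the runtime; grpcio-tools provides the protoc-based code generator.
pip install grpcio grpcio-tools
```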

Task 2

Verifying the proto contract that both services will use and generating gRPC stubs: img_10.png

Ensuring the stubs got copied into both services' directories, as they both use them: img_11.png
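A sketch of the stub generation and copy steps; the proto filename and the two service directory names are assumptions about this repo's layout:

```shell
# Generate Python stubs from the shared contract in proto/.
python -m grpc_tools.protoc \
  -I proto \
  --python_out=. \
  --grpc_python_out=. \
  proto/conversion.proto

# Each service gets its own copy of the generated files.
cp conversion_pb2.py conversion_pb2_grpc.py converter-api/
cp conversion_pb2.py conversion_pb2_grpc.py conversion-engine/
```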

Verifying that the engine implements Convert (which actually runs the conversion on the request) and that the API validates input so the engine can focus on conversion:

img_12.png

Task 3

Reviewing Dockerfile:

img_13.png

Building and pushing with Cloud Build to make it available in our Artifact Registry (for Cloud Run): img_14.png

Deploying to serverless Cloud Run: img_15.png

Saving the URL of the service for later use: img_16.png

Calling the service by the saved URL (but getting rejected by the Cloud Run edge auth checker): img_17.png
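The Task 3 steps above, sketched as commands; the service name, region, and image tag are assumptions, not the lab's actual values:

```shell
# Build and push the engine image with Cloud Build.
gcloud builds submit --tag "$ART_REG/conversion-engine:latest" conversion-engine/

# Deploy to Cloud Run, requiring authentication from callers.
gcloud run deploy conversion-engine \
  --image "$ART_REG/conversion-engine:latest" \
  --region us-east1 \
  --no-allow-unauthenticated

# Save the service URL for later use.
ENGINE_URL=$(gcloud run services describe conversion-engine \
  --region us-east1 --format 'value(status.url)')

# An unauthenticated call should be rejected at the Cloud Run edge (HTTP 403).
curl -i "$ENGINE_URL"
```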

Task 4

Reviewing the Dockerfile:

img_18.png

Building and pushing with Cloud Build so we can again use it in Cloud Run:

img_19.png

Deploying to Cloud Run:

img_20.png

Saving URL and ensuring Cloud Run container is up and available: img_21.png
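Task 4 mirrors Task 3 but makes the API public; again, names, region, and the health path are assumptions:

```shell
# Build, push, and deploy the public-facing converter API.
gcloud builds submit --tag "$ART_REG/converter-api:latest" converter-api/

gcloud run deploy converter-api \
  --image "$ART_REG/converter-api:latest" \
  --region us-east1 \
  --allow-unauthenticated

# Save the URL and confirm the container answers (endpoint path hypothetical).
API_URL=$(gcloud run services describe converter-api \
  --region us-east1 --format 'value(status.url)')
curl -i "$API_URL/health"
```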

Task 5

Getting the converter API's service account and granting it invoker rights so it can call the engine:

img_22.png

Ensuring the converter API can access the engine:

img_23.png
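The IAM change boils down to one binding on the engine service; the service-account email below (default compute SA) is an example assumption:

```shell
# Allow the converter API's identity to invoke the private engine service.
gcloud run services add-iam-policy-binding conversion-engine \
  --region us-east1 \
  --member "serviceAccount:${PROJECT_NUMBER}-compute@developer.gserviceaccount.com" \
  --role roles/run.invoker
```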

Task 6

Health checking to ensure converter API is up:

img_24.png

Listing supported units:

img_25.png

Performing a length conversion (which will have the API trigger a gRPC call):

img_26.png

Performing temperature conversion:

img_27.png

Multiple tests:

img_28.png

Testing rejection when the query isn't valid:

img_29.png
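The Task 6 tests can be sketched as curl calls; the endpoint paths and query parameters below are assumptions about the lab API, not its documented interface:

```shell
curl "$API_URL/health"                          # liveness check
curl "$API_URL/units"                           # list supported units
curl "$API_URL/convert?from=m&to=ft&value=3"    # length conversion (triggers gRPC)
curl "$API_URL/convert?from=c&to=f&value=100"   # temperature conversion
curl -i "$API_URL/convert?from=m&to=kg&value=1" # invalid pair: expect a 4xx
```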

Trying to observe a cold start. This was after manually setting the service's scaling to 0 and then back to auto (0 -> N), so I know this is actual boot-up time (about a tenth of a second difference; impressive):

img_30.png

Cloud Run Overview: img_31.png

Ensuring the converter is publicly accessible:

img_33.png

Ensuring conversion engine requires authentication:

img_32.png
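The two access checks above can be expressed as status-code probes (URLs and path are assumptions):

```shell
# The public API should answer without credentials.
curl -s -o /dev/null -w '%{http_code}\n' "$API_URL/health"   # expect 200

# The engine should reject anonymous calls at the Cloud Run edge.
curl -s -o /dev/null -w '%{http_code}\n' "$ENGINE_URL"       # expect 403
```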

Checking converter API logs. Note that this confirms our 0.1 s difference was a cold start:

img_34.png

Checking engine logs. Same thing with the cold start. Note it autoscaled at 16:02:19.138 and responded to the call at 16:02:19.138:

img_35.png

Checking Artifact Registry images:

img_36.png

Setting engine's tag to v1:

img_37.png
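Tagging is a single Artifact Registry command; the image path is an assumption:

```shell
# Add a v1 tag to the existing engine image without rebuilding it.
gcloud artifacts docker tags add \
  "$ART_REG/conversion-engine:latest" \
  "$ART_REG/conversion-engine:v1"
```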

Cleaning up by removing the Cloud Run services:

img_38.png
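Cleanup, assuming the service names used earlier:

```shell
# Remove both Cloud Run services so nothing keeps running (or billing) after the lab.
gcloud run services delete conversion-engine --region us-east1 --quiet
gcloud run services delete converter-api --region us-east1 --quiet
```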

Reflection

  1. Unit conversion is stateless, as its output depends solely on its input. It requires no auth checks, no DB queries, nothing. This makes it well suited to an auto-scaled service, since any instance can handle any request without worrying about state or coordinating with other backends. No state synchronization is needed.
    1. The REST interface is defined by the API handler in server.py. It parses incoming requests and issues a gRPC call via an internal service account to an engine.
    2. That gRPC interface itself is defined in the proto/ folder, which has the agreed-upon structure of requests. When generated, it is copied to each service's folder so they can both access and use it.
    3. The intended audience for the REST API is the general public. The gRPC interface is only for authenticated internal traffic and cannot be used by anything except the converter API.
    4. It's useful to separate these interfaces as they can be independently deployed, updated and scaled without taking the entire system down.
  2. The API validates before calling the engine so that each service has only one responsibility. This means the engine isn't bogged down with erroneous requests, and it keeps their functions separated. The gRPC request, once sent, should be authoritative and error-free, or else it could cause undefined behavior (the engine may not be designed to withstand errors).
  3. The API proves its identity by requesting a short-lived token from Google's metadata server. This token represents trust, and the engine checks it to ensure the caller can be trusted. Before forwarding the call, Cloud Run checks its source, ensuring it has the proper roles and permissions. This is considered zero trust, as everything is checked every step of the way. There is no caching of trust: on every request, the requester must prove its identity authoritatively.
  4. Cold starts refer to the first request after a service has scaled to zero during idle time. This system needs two cold starts: first the API, then the engine. Cold-start time is largely determined by the app; a heavyweight web server will take much longer than this tiny API and engine. Turning the containers off reduces usage during the downtime. Serverless containers can also attach to arbitrary VPCs to reduce DB latency, which isn't touched on in this lab but is another plus that justifies scaling to 0. Additionally, scaling to zero can be turned off if it's undesirable, and a service can be kept at a minimum of 1 container.
  5. The generated files are not stored next to the proto because the stubs are generated artifacts for use by each service; each service needs its own copy placed in its own directory.
  6. This would be fairly easy to decide. Validation (more checks) would be solely API-side. The proto contract should only be modified if new data needs to be sent between the API and engine. The engine would only be updated with new features and conversions. While a large change may require touching all of these, this layout ensures single responsibilities and distinct service boundaries.
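The token flow from point 3 can be sketched as two commands; the metadata endpoint and `Metadata-Flavor` header are standard GCP, while `ENGINE_URL` is an assumption carried over from earlier steps:

```shell
# Inside a Cloud Run container, fetch a short-lived identity token scoped
# (via the audience parameter) to the engine's URL.
TOKEN=$(curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=${ENGINE_URL}")

# Attach the token to the outbound call; Cloud Run verifies it on every
# request before the engine ever sees the traffic.
curl -H "Authorization: Bearer $TOKEN" "$ENGINE_URL"
```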

Diagram

The diagram shows how the client queries the API. The API sends a gRPC conversion request via a shared contract to the conversion engine (after authentication, not pictured for brevity), and the engine returns the result to the API and then to the client. The Cloud Run services themselves are inherently scalable without any configuration, so the frontend and backend can both scale or be updated independently. They both pull images from Artifact Registry.

img_39.png

@startuml
title Lab 5 Architecture

actor Client

cloud "Google Cloud Run Platform" {
  component "Converter API" as API
  component "Conversion Engine" as ENGINE
}

database "Artifact Registry" as AR

Client --> API : HTTPS /convert

API --> ENGINE : ConversionRequest

ENGINE --> API : ConversionResult

API --> Client : JSON

AR --> API : converter-api:v1
AR --> ENGINE : conversion-engine:v1

@enduml