TL;DR: Your AI agent in Claude Code on the Web can use Google Cloud (or AWS/Azure) to store large datasets, run long computations, deploy web apps, and schedule recurring jobs. Once you have a cloud account and project, the repo-specific setup takes about five minutes:
- Set an encryption password in your environment settings (see Step 1 below). If you only use one cloud provider, name it CLOUD_CREDENTIALS_KEY. For provider-specific setups, use GCP_CREDENTIALS_KEY / AWS_CREDENTIALS_KEY / AZURE_CREDENTIALS_KEY.
- Tell the agent: "Install the cloud-bootstrap skill from https://github.com/ipeirotis/cloud-bootstrap into this repo."
- Tell the agent: "Set up GCP access for this project."
The agent walks you through the rest, including one command you run in Cloud Shell to generate a temporary token.
The moment I became a human copy-paster
A few weeks ago, I was debugging data issues on the mturk-tracker demographics site. Claude Code would write a diagnostic script. I would deploy it to the server. I would copy the output. I would paste it back into Claude. Claude would write the next script. I would deploy that one. Copy. Paste. Deploy. Copy. Paste. Deploy.
I was not managing an AI agent. I was its copy-paster. Claude did the thinking. I did the Ctrl-C, Ctrl-V.
That was problem number one.
Problem number two: I needed to collect data from several websites, a process that would take a day or two of continuous scraping. Claude started the work, but the sandbox kept timing out. The session would die, I would restart it, Claude would pick up where it left off, and then the session would die again. The only way to keep things moving was to babysit: break the bigger task into smaller subtasks and then say "Do next task." "Do next task." "Do next task." Over and over. I was not reviewing or directing anything. I was just pressing the button to keep the machine running. I understand that this is our new role as humans, serving our new AI overlords, but... boooooring.
Problem number three: I needed to train a model that required a GPU. The Claude Code sandbox does not have GPUs. So I had to manually launch a VM on Google Cloud, SSH into it, clone the repo, install the dependencies, start the training, and then remember to check back later and shut the machine down before it burned through my budget. Claude had written all the training code. But the last mile (getting it to actually run somewhere with the right hardware) was entirely on me. The AI writes the code. The GPU does the math. And I am the guy who forgets to shut down the machine. Guess which component has the highest error rate.
Three different problems. Same root cause. The sandbox is a walled garden. Claude can think, it can code, it can analyze. But it cannot reach the outside world. It cannot talk to a server, run something overnight, or spin up a machine with a GPU. Everything that requires infrastructure beyond a small ephemeral container? That is your job.
The fix: give the agent a cloud account.
What changes once the agent has cloud access
Remember the mturk-tracker debugging? With cloud access, Claude deploys its own diagnostic scripts to Cloud Functions, runs them against the live data, reads the results, and iterates. No copying. No pasting. No human in the middle.
The web scraping that required me to babysit? Claude deploys the scraper as a Cloud Function with a scheduler. It runs every 15 minutes, stores results in a Cloud Storage bucket, and I check in the next day. I literally went to sleep and woke up with the data collected.
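Under the hood, "deploy the scraper with a scheduler" boils down to two gcloud commands. This is a hedged sketch of what the agent runs; the function name, region, project, and service account email are all made-up examples, not the actual names the skill uses.

```shell
# Hypothetical sketch: deploy the scraper as an HTTP-triggered Cloud Function
# (names and region are illustrative).
gcloud functions deploy scrape-sites \
  --runtime=python312 --region=us-central1 \
  --trigger-http --no-allow-unauthenticated \
  --entry-point=run_scrape

# Invoke it every 15 minutes via Cloud Scheduler, authenticating as the
# service account so the function can stay private.
gcloud scheduler jobs create http scrape-every-15m \
  --schedule="*/15 * * * *" \
  --uri="https://us-central1-my-project.cloudfunctions.net/scrape-sites" \
  --oidc-service-account-email="claude-agent@my-project.iam.gserviceaccount.com"
```

The point is that the whole "babysitting" loop collapses into a cron expression that Google runs for you, whether your laptop is open or not.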
The GPU training? Claude launches a VM with the right specs (say, an n1-standard-4 with a T4 GPU), clones the repo, installs everything, starts training, and sets up a shutdown script that kills the machine when the job finishes. Results go to Cloud Storage. I went to dinner. When I came back, the model was trained, the results were in the bucket, and the VM was already off. The alternative was me manually SSH-ing into a machine, running htop every twenty minutes, and hoping I remembered to shut it down before I went to bed. (Ask me how I know that "hoping I remember to shut it down" is not a reliable cost management strategy.)
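The "VM that shuts itself down" trick deserves a sketch, because it is the part people miss. Everything below is illustrative, assuming a Deep Learning VM image and made-up repo and bucket names; the self-destruct line at the end of the startup script is what keeps you from paying for an idle GPU overnight.

```shell
# Hypothetical sketch: launch a T4 GPU VM that trains, uploads results,
# and deletes itself. Names, zone, and image are illustrative.
gcloud compute instances create gpu-trainer \
  --zone=us-central1-a \
  --machine-type=n1-standard-4 \
  --accelerator=type=nvidia-tesla-t4,count=1 \
  --maintenance-policy=TERMINATE \
  --image-family=pytorch-latest-gpu \
  --image-project=deeplearning-platform-release \
  --metadata=startup-script='#!/bin/bash
    git clone https://github.com/your-org/your-repo.git /opt/job
    cd /opt/job && pip install -r requirements.txt
    python train.py
    gsutil cp -r results/ gs://your-bucket/results/
    # Self-destruct when the job finishes, so the VM never runs overnight.
    gcloud compute instances delete gpu-trainer --zone=us-central1-a --quiet'
```

Because the delete command lives inside the startup script, the machine's lifetime is tied to the job, not to my memory.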
The setup (yes, there is some setup)
I will walk through this using Google Cloud, since that is what I use (the concepts are the same for AWS and Azure). If you do not already have a Google Cloud account, go to cloud.google.com and sign up.
Once you have an account, create a project in the Cloud Console. A project is Google Cloud's way of organizing resources and billing. Click the project dropdown at the top, click "New Project," give it a name, and note the project ID.
You do not need to install anything on your own computer. When you need to generate a token, you will use Google Cloud Shell: a browser-based terminal with everything pre-installed.
My pattern: one repo, one cloud project, same name
Every GitHub repo I work with gets its own dedicated Google Cloud project. And they get the same name. The repo paper-oral-exams gets the Cloud project paper-oral-exams. The repo course-ai-pm gets the Cloud project course-ai-pm.
Why? Mostly resource isolation. The agent for the course repo cannot accidentally touch the research data. Each agent gets exactly the access it needs for its own project and nothing else. It also makes housekeeping easier: when everything for a project lives in one Cloud project, you can quickly spot which storage buckets, databases, and VMs are still needed and which are leftovers. No more "wait, whose VM is this and why is it still running?"
Creating a Cloud project is free and takes 30 seconds.
Service accounts: giving the agent its own keys (not yours)
When you use Google Cloud, you log in with your Google account. But an AI agent is not you. And more importantly, it should not be you. Your Google account has access to everything: your email, your billing, your entire cloud infrastructure. Giving all of that to an automated tool would be like handing your intern the keys to the building, your credit card, and your Netflix password. Just in case.
Instead, you give the agent a service account: a restricted identity designed specifically for automated tools. It has its own email address (something like claude-agent@my-project.iam.gserviceaccount.com) and you decide exactly what it can do. Read from this storage bucket. Deploy this function. Query this database. Nothing more.
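For the curious, creating a service account is three commands. This is roughly what the agent does on your behalf; the project ID and the role below are examples, and in practice the agent proposes the specific roles your repo needs.

```shell
# Illustrative sketch (project name and role are examples).
gcloud iam service-accounts create claude-agent \
  --project=my-project --display-name="Claude Code agent"

# Grant only the roles you approved, e.g. read/write on Cloud Storage objects.
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:claude-agent@my-project.iam.gserviceaccount.com" \
  --role="roles/storage.objectAdmin"

# Generate the key file the agent will use to authenticate.
gcloud iam service-accounts keys create key.json \
  --iam-account=claude-agent@my-project.iam.gserviceaccount.com
```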
A caveat: the approach below (encrypting a service account key in the repo) is a pragmatic workaround for agent environments that do not yet support proper workload identity or secret stores. If the worst case is "the agent ran up a $200 bill on a research project," you are fine. If the worst case involves production data or your personal credentials, use a different setup. When proper agent identity federation exists, this will get simpler. For now, it is the best approximation available.
The service account authenticates using a key file: a JSON file that acts as its password. Whoever has this file can act as the service account. Which means this file needs to be protected.
But here is the catch: Claude Code runs in a sandbox that resets after each session. The only thing that persists is the GitHub repo. So the key file needs to live in the repo somehow, but committing a plaintext credentials file to a repo is a classic security mistake. (It is so common that GitHub literally has automated scanning to catch people doing it.)
The solution: encrypt the key file and commit the encrypted version. The encryption password lives in an environment variable in Claude Code, which persists across sessions but never enters the repo. At the start of each session, a hook decrypts, authenticates, and deletes the plaintext immediately. The encrypted file is useless without the password. The password is useless without the encrypted file. And if in the worst case scenario your password leaks, you only exposed the service account with limited permissions, and you can always deprecate and regenerate the credentials.
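Here is a minimal demo of that encrypt/decrypt cycle using plain openssl, with a stand-in file instead of a real key. The filenames are hypothetical; the actual skill handles this for you, and the decrypt step would normally be followed by `gcloud auth activate-service-account --key-file=key.json`.

```shell
# Demo of the encrypt/decrypt roundtrip (hypothetical filenames; the
# skill's actual file layout may differ).
export CLOUD_CREDENTIALS_KEY="your-strong-passphrase-here"
echo '{"type": "service_account"}' > key.json   # stand-in for the real key file

# Encrypt: key.json.enc is the file that gets committed to the repo.
openssl enc -aes-256-cbc -pbkdf2 -salt \
  -in key.json -out key.json.enc -pass env:CLOUD_CREDENTIALS_KEY
rm key.json   # plaintext never enters the repo

# At the start of each session, the hook decrypts, authenticates,
# and deletes the plaintext again.
openssl enc -d -aes-256-cbc -pbkdf2 \
  -in key.json.enc -out key.json -pass env:CLOUD_CREDENTIALS_KEY
cat key.json
rm key.json
```

The encrypted file and the passphrase are useless on their own; an attacker needs both, plus the key only unlocks a narrowly scoped service account.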
The five-minute walkthrough
You do this once per repo.
Step 1: Set your encryption password.
In Claude Code, open the environment settings for your session and find the "Environment Variables" field. Add a new variable:
CLOUD_CREDENTIALS_KEY=your-strong-passphrase-here
(If you work with multiple cloud providers across different repos, you can use provider-specific names like GCP_CREDENTIALS_KEY or AWS_CREDENTIALS_KEY instead.)
A caveat: Claude Code currently warns against putting secrets in environment variables because there is no dedicated secrets store yet. I am using this approach because the passphrase only protects an already-restricted service account, not your personal cloud credentials. When a proper secrets store ships, this workflow will use it.
Step 2: Install the skill.
Open your repo in Claude Code and tell the agent:
"Install the cloud-bootstrap skill from https://github.com/ipeirotis/cloud-bootstrap into this repo."
(For those comfortable with a terminal, you can also run curl -sSL https://raw.githubusercontent.com/ipeirotis/cloud-bootstrap/main/install.sh | bash in any environment with access to the repo.)
Step 3: Tell the agent to set up cloud access.
"Set up GCP access for this project."
The agent will ask you for your Google Cloud project ID. Then it will look at your repo and propose a set of minimum permissions: "Based on this repo, I think the service account needs access to Cloud Storage and BigQuery. Here is why. Shall I proceed?" You approve or adjust. For a new or empty repo, it will ask what you plan to do first.
Then the agent will ask you to run a command in Cloud Shell. To open it, go to shell.cloud.google.com or click the ">_" icon in the top-right of the Cloud Console. Make sure you are in the right project, and run:
gcloud auth print-access-token
You paste the result back. This gives the agent a temporary token (valid for one hour) to do the initial setup. The agent creates the service account, grants the approved permissions, generates a key, encrypts it, commits the encrypted file, and sets up an automatic authentication hook for future sessions. The temporary token expires. From this point on, every new session starts fully authenticated. You just start working.
For teams: each person gets their own encrypted key file with their own password. The README has the details.
What to do once the ceiling is gone
Once cloud access is set up, the agent will start proactively suggesting cloud improvements when it notices opportunities: "Would it help if I moved this dataset to BigQuery so we do not have to re-process it every session?" You can also prompt this explicitly: "Can you improve your process, knowing that you have access to GCP?"
I had a dataset too large to fit in the sandbox. The agent uploaded it to BigQuery. Now I query it conversationally: "Show me the distribution of response times by condition." The agent writes the SQL, runs it, brings back the results. The data lives in the cloud permanently. No re-uploading, no re-processing.
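Behind that conversational question is an ordinary BigQuery query. A sketch of what the agent might run, with a made-up dataset and column names:

```shell
# Hypothetical: project, dataset, and columns are illustrative. The agent
# translates the natural-language question into SQL like this.
bq query --use_legacy_sql=false '
  SELECT condition, AVG(response_time_ms) AS avg_response_ms
  FROM `my-project.study_data.responses`
  GROUP BY condition
  ORDER BY condition'
```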
I needed to run a survey for a research study. The agent deployed a Cloud Function with a simple web form, backed by a database. Participants visit a URL, submit responses, the data lands in a table I can query later. No server to manage. No hosting to configure. Thirty minutes from "I need a survey" to a live URL that participants were already clicking on. I still have not fully processed how absurd that is.
What does it cost? Less than you might think. Cloud Functions and BigQuery queries cost cents per run. A T4 GPU VM runs about $0.35/hour. My monthly bill for all of this is usually under $10, though a long GPU job will cost more. One practical tip: set up a billing budget alert in Google Cloud before giving the agent access. Agents can get stuck in loops, and a $10 budget alert is cheaper than finding out the hard way.
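Setting that budget alert can itself be done from the command line. A hedged sketch, assuming you look up your billing account ID in the Cloud Console first; the display name and thresholds are examples:

```shell
# Illustrative: replace the billing account ID and project with your own.
gcloud billing budgets create \
  --billing-account=XXXXXX-XXXXXX-XXXXXX \
  --display-name="agent-project-budget" \
  --budget-amount=10USD \
  --filter-projects="projects/my-project" \
  --threshold-rule=percent=0.5 \
  --threshold-rule=percent=1.0
```

With thresholds at 50% and 100%, you get an email at $5 and $10, which is plenty of warning for a runaway loop.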
The bigger picture: finding the next loop to close
There is a trajectory here worth naming. First, the AI learned to generate: write a script, draft a document, produce code. Then it learned to execute: run the script, push the changes, create a pull request. Now it is learning autonomy: spin up a server, run the job, shut down the server, and report back. Each step closes a loop where a human used to be the connector.
The previous post gave the agent memory and a workflow. This one gives it infrastructure. Same pattern: every time you find yourself doing grunt work to connect two things that the agent should be able to connect on its own, that is a loop waiting to be closed.
What comes next
The cloud-bootstrap skill supports GCP, AWS, and Azure. It handles first-time setup, adding team members, and credential rotation (it tracks credential age and warns you after six months). It also supports multi-provider setups in the same repo and handles permission escalation gracefully: if the agent hits a permission wall, it stops and tells you exactly what role it needs and why. It never silently fails, and it never gives itself more access.
This is still early. The whole approach (encrypting credentials in a repo, pasting short-lived tokens) is a workaround, as I noted above. When proper agent identity federation arrives, this will simplify considerably. But right now it works, and for isolated research projects with tightly scoped permissions, the risk is manageable.
But the agent can still only work inside the one repo it is connected to. It cannot clone a second repo, pull in a dataset from another project, or push results somewhere a collaborator can see. It can work inside one room but cannot walk between rooms. The next post will fix that: installing gh and setting up a GitHub personal access token so the agent can move freely across repos. It is a much shorter setup than this one.
After that: the "master repo, satellite repos" setup for coordinating work across multiple projects (which needs the GitHub token to work), MCP configuration for integrating Gmail and Google Calendar, and more on the "council of LLMs" approach I have been using for grading oral exams and for reviewing my work.
But start here. Give the agent a cloud account. And then go to dinner. When you get back, the agent will have finished collecting the data, trained the model, shut down the GPU VM, cleaned up everything, and gone to sleep. Your kids, on the other hand, if they are like mine, will still be awake and making fun of the parental controls on their iPads, and the kitchen will be a mess.