My route into AI and ML wasn't a single course or certification. It was three parallel tracks that ended up converging: Anthropic's courses on AI fluency and agent development, which first inspired me to look at Python; the Google Cloud Professional Machine Learning Engineer certification (built around Vertex AI); and, latterly, the AWS Certified Generative AI Developer track (centred on Amazon Bedrock). Each one filled a different gap, but Anthropic's work came first and set the foundation for how I approached the other two.

Anthropic: the framework

Anthropic's AI Fluency course provided something the certification tracks didn't: a framework for how to collaborate with AI effectively. The 4D model (Delegation, Description, Direction, Debugging) changed how I approach every AI interaction. It sounds abstract but it's genuinely practical; the Tetris project was built specifically to demonstrate the 4D approach in action.

Beyond AI Fluency, the courses on agent skills and sub-agents gave me a better understanding of how Claude thinks about multi-step tasks, tool use, and reasoning chains.

Claude Code itself has been the most impactful tool across all of this. It's where the majority of the hands-on building happens: the AI Coach app (post yet to be written up), the Tetris game, the portfolio sites, the dashboard, and this blog. Understanding how to work with it effectively is arguably more valuable than any single certification. Having the 4D framework in place before starting the certifications meant I was already thinking about AI collaboration properly, not just learning to call APIs.

It also inspired me to improve my Python skills, since it was clear Python was fundamental to developing with Claude; see 100 Days of Python.

Google Cloud ML Engineer

The Google certification covers the full machine learning lifecycle: feature engineering with BigQuery ML, end-to-end pipelines in Vertex AI, hyperparameter tuning with Vizier, model monitoring and drift detection, MLOps, and Vertex AI Pipelines. In addition to official learning content I completed fifteen labs (plus mini-labs) covering everything from custom prediction routines to shadow deployments.

The crash course notebooks alongside the labs were useful for building intuition around the fundamentals: linear regression, classification, fairness in ML. The data science stack from the Python course (Pandas, NumPy, scikit-learn) paid off here; I wasn't learning the tools and the concepts simultaneously.
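That stack is enough to reproduce the core exercises on its own. As a rough sketch of the kind of fundamentals work involved (synthetic data, not taken from the course materials), a linear regression with scikit-learn looks like:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic data: y = 3x + 2 plus a little noise (illustrative only)
rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=(200, 1))
y = 3 * X.ravel() + 2 + rng.normal(0, 0.5, size=200)

# Hold out a test split so the fit is evaluated on unseen data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LinearRegression().fit(X_train, y_train)
print(f"coef={model.coef_[0]:.2f}, intercept={model.intercept_:.2f}")
print(f"R^2 on held-out data: {r2_score(y_test, model.predict(X_test)):.3f}")
```

The same train/evaluate split discipline carries straight over to Vertex AI, where the framework runs the evaluation for you.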

The cert itself is broad. It expects you to know when to use AutoML vs custom training, how to structure ML pipelines, and how to monitor models in production. Vertex AI is the central service; nearly everything routes through it. The exam was challenging but the hands-on labs made a real difference; I'd have struggled with theory alone.

AWS Generative AI Developer

The AWS track is fundamentally different from the Google cert and still a work-in-progress. The GCP certification focuses on training, deploying, and monitoring models. The AWS Generative AI Developer certification focuses on integrating foundation models: choosing the right one, wiring it up with RAG and agents, securing it, and optimising cost. Model training is explicitly out of scope.

The central service is Amazon Bedrock rather than Vertex AI. My own learning plan (outside of official content) covers nine labs and three integration projects: Bedrock fundamentals, RAG with Knowledge Bases, prompt engineering and prompt flows, guardrails and content safety, agentic AI with Bedrock Agents, FM API integration patterns, security and governance, deployment optimisation, and evaluation and testing.
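To give a flavour of the RAG-with-Knowledge-Bases lab, here's a minimal sketch of the retrieval step. It builds the arguments for the bedrock-agent-runtime Retrieve API; the Knowledge Base ID is a placeholder, and the actual call (which needs AWS credentials) is left commented out.

```python
def build_retrieve_kwargs(kb_id: str, query: str, top_k: int = 5) -> dict:
    """Assemble arguments for bedrock-agent-runtime's Retrieve API.

    This is just the retrieval half of RAG: fetch the top_k most
    relevant chunks, which then get stuffed into the model prompt.
    """
    return {
        "knowledgeBaseId": kb_id,
        "retrievalQuery": {"text": query},
        "retrievalConfiguration": {
            "vectorSearchConfiguration": {"numberOfResults": top_k}
        },
    }

# "KB123EXAMPLE" is a placeholder, not a real Knowledge Base ID
kwargs = build_retrieve_kwargs("KB123EXAMPLE", "What does the refund policy say?")

# Requires AWS credentials and an actual Knowledge Base, so commented out:
# import boto3
# client = boto3.client("bedrock-agent-runtime")
# results = client.retrieve(**kwargs)["retrievalResults"]
```

Bedrock also offers a RetrieveAndGenerate variant that does retrieval and the model call in one step; splitting them as above keeps the prompt assembly in your own hands.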

One practical output of this track is the AI Coach app: a multi-persona advisory application built entirely on AWS. It uses React on Amplify, Cognito for authentication, Lambda for streaming responses, DynamoDB for conversation history, and Bedrock with Claude for the AI layer. Bedrock Knowledge Bases provide RAG with auto-chunking. The whole thing is Terraform-managed with GitHub Actions CI/CD. It's the biggest application I've built from scratch, and it brought together nearly everything from both certification tracks.
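The Bedrock piece of that architecture boils down to a call like the following sketch, using boto3's Converse API. The model ID and inference settings here are assumptions for illustration, and the real app streams responses from Lambda rather than making a blocking call.

```python
def build_converse_kwargs(persona_prompt: str, user_message: str, model_id: str) -> dict:
    """Assemble arguments for bedrock-runtime's Converse API.

    The persona prompt goes in the system field; in the real app the
    conversation history loaded from DynamoDB would be appended to
    messages before the latest user turn.
    """
    return {
        "modelId": model_id,
        "system": [{"text": persona_prompt}],
        "messages": [{"role": "user", "content": [{"text": user_message}]}],
        "inferenceConfig": {"maxTokens": 1024, "temperature": 0.7},
    }

# Model ID is an assumption -- check which models are enabled in your region
kwargs = build_converse_kwargs(
    "You are a pragmatic career coach.",
    "How should I structure a 90-day learning plan?",
    "anthropic.claude-3-5-sonnet-20240620-v1:0",
)

# Requires AWS credentials and Bedrock model access, so commented out here:
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.converse(**kwargs)
# print(response["output"]["message"]["content"][0]["text"])
```

Swapping `converse` for `converse_stream` is what gives the app its token-by-token streaming behaviour.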

How it all connects

Looking back, the three tracks were more complementary than I expected. Anthropic taught me how to collaborate with AI as a development partner; the 4D framework and Claude Code became the way I work. The Google cert taught me how ML works at a fundamental level: how models are trained, evaluated, and deployed on Vertex AI. The AWS track taught me how to integrate pre-trained foundation models into real applications using Bedrock.

Python was the foundation for all of it. The 100 Days of Code course gave me the language skills; the certifications gave me the platform knowledge; and the Anthropic tooling gave me the workflow. Each layer built on the one below it.

The next step is to add a Slack bot to the web project using Slack Bolt for Python, probably followed by an iOS app of the site to try mobile development.

The Google ML repo is on GitHub. The AWS Gen AI repo is to follow.