This blog post is the third in my series “From Solution Architect to AI Architect,” where I share my own learning journey, the courses I take, the concepts that actually matter, and the skills I’m building along the way. My goal is to understand how AI is reshaping our role as architects and what we need to stay relevant and effective.
In this post, I’m recapping my experience with the Generative AI for Everyone course. I’m not trying to rewrite the entire course or cover every topic. Instead, I’m focusing on what I think truly matters: the ideas that change how we design, reason, and build systems with AI.
I’ll go module by module, but always through an architect’s lens.
Module 1 – How to Think About Generative AI
One of the most valuable things I took from Module 1 is a simple mindset shift: when you interact with an LLM, the quality of your instructions is everything. Instead of expecting magic, I focus on giving it clear context, constraints, output formats, and even small examples. The more deliberate I am with my instructions, the better the model performs.
I now tell the model who it should act as, who the audience is, what tone I want, what structure I expect, and what the final output should look like. It’s a lot like handing someone a well-defined task instead of a vague request — and the improvement in the output is immediate.
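To make that concrete, here is a minimal sketch of what this deliberate instruction style can look like in code. The template and field names are my own, not from the course — the point is simply that a structured prompt is an assembled artifact, not an afterthought:

```python
def build_prompt(role: str, audience: str, tone: str, structure: str, task: str) -> str:
    """Assemble a deliberate, well-structured instruction for an LLM."""
    return (
        f"You are {role}.\n"
        f"Audience: {audience}.\n"
        f"Tone: {tone}.\n"
        f"Output structure: {structure}.\n\n"
        f"Task: {task}"
    )

prompt = build_prompt(
    role="a senior software architect",
    audience="a non-technical steering committee",
    tone="concise and neutral",
    structure="three bullet points, each one sentence",
    task="Summarize the trade-offs of adopting an LLM-based support assistant.",
)
print(prompt)
```

Handing this prompt to a model is the programmatic equivalent of handing a colleague a well-defined task instead of a vague request.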
Module 1 also shows how LLMs can summarize long documents, rewrite text in any style you want, or help you brainstorm new ideas. But they’re also prone to mistakes, especially when prompts are ambiguous or when the task requires precise facts or mathematical accuracy.
A small example from the course stuck with me. When you provide a clear instruction like “summarize this customer review in one sentence, focusing only on sentiment,” the model behaves reliably. But without that precision, it might add details, make assumptions, or drift into interpretations you didn’t want.
Module 1 reinforced the idea that prompting is an iterative, empirical process. You try something, observe the output, refine the instruction, and improve the result step by step.
If I had to summarize Module 1 from an architect’s point of view, it would be this: LLMs are powerful reasoning tools when you guide them well, but they need structure, clarity, and context.
Module 2 – Generative AI Projects
Module 1 covered using LLMs via a simple web interface. Module 2 goes deeper and asks a more practical question: how do we actually build AI-powered products?
The main takeaway for me is that generative AI has dramatically lowered the barrier to building AI applications. Andrew uses a restaurant review sentiment example that makes the difference clear. Before LLMs, you had to collect labelled data, train a model, deploy it, and monitor it, a process that could take months and require a whole ML team. With LLMs, you can now get a “good enough” sentiment classifier in minutes with a prompt and a single API call.
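A sketch of that “sentiment classifier in minutes” idea. The `call_llm` parameter is a placeholder for whatever chat-completion API you actually use; everything else is plain Python, with a stub so the example runs without an API key:

```python
def classify_sentiment(review: str, call_llm) -> str:
    """Classify a restaurant review as 'positive' or 'negative' with one LLM call."""
    prompt = (
        "Classify the sentiment of the following restaurant review.\n"
        "Answer with exactly one word: positive or negative.\n\n"
        f"Review: {review}"
    )
    answer = call_llm(prompt).strip().lower()
    # Guard against the model drifting off-format.
    return answer if answer in ("positive", "negative") else "unknown"

# Stubbed model so the sketch runs offline; swap in a real API call in practice.
fake_llm = lambda prompt: "Positive"
print(classify_sentiment("The pasta was amazing!", fake_llm))  # positive
```

Compare this with the old pipeline of collecting labelled data, training, deploying, and monitoring — the entire “model” here is one prompt and one call.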
As an architect, this changes how I think about feasibility. Many features that used to be too expensive or too slow to experiment with are now quick prototypes. The real question becomes: does this add value, and how does it fit into the workflow?
Module 2 also introduces three key technical levers beyond basic prompting.
The first is RAG, Retrieval Augmented Generation. Instead of hoping the model “knows” your internal policies or documentation, you retrieve relevant content and feed it into the prompt. It shifts the mindset from treating the model as a knowledge base to treating it as a reasoning engine over your own data. This is how internal Q&A bots, policy assistants, and knowledge search systems are actually built.
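Here is a deliberately naive sketch of the RAG pattern: retrieve the most relevant documents, then assemble them into the prompt. Real systems use vector embeddings and a vector store instead of keyword overlap, but the shape of the flow is the same:

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval; production systems use embeddings."""
    q_words = set(query.lower().split())
    scored = sorted(documents, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:k]

def build_rag_prompt(query: str, documents: list[str]) -> str:
    """Feed retrieved content into the prompt so the model reasons over YOUR data."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return (
        "Answer using ONLY the context below. If the answer is not there, say so.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

policy_docs = [
    "Employees may work remotely up to three days per week.",
    "The cafeteria is open from 8am to 3pm.",
    "Expense reports must be filed within 30 days.",
]
print(build_rag_prompt("How many days can I work remotely?", policy_docs))
```

Note the instruction to answer only from the provided context — that single line is what turns the model from a knowledge base into a reasoning engine over your own data.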
The second is fine-tuning. While RAG provides the model with new information at query time, fine-tuning changes the model’s behaviour by training it on examples. It’s useful when you want consistent style or domain-specific patterns, like customer service summaries or legal wording. For me, the real architectural question is not “What is fine-tuning?” but “When does it make more sense than RAG or prompting?”
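For intuition, fine-tuning data is mostly just input/output pairs showing the behaviour you want. A sketch of preparing such examples as JSONL — the exact schema varies by provider, so treat the field names here as illustrative:

```python
import json

# Hypothetical training pairs teaching a consistent summary style.
examples = [
    {"prompt": "Summarize this support ticket: Customer cannot reset password.",
     "completion": "Issue: password reset failure. Action: sent reset link manually."},
    {"prompt": "Summarize this support ticket: Invoice shows wrong billing address.",
     "completion": "Issue: incorrect billing address. Action: updated account details."},
]

# JSONL: one JSON object per line, the common format for fine-tuning uploads.
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

Seeing the data this way also sharpens the RAG-versus-fine-tuning question: if what you need is fresh *information*, retrieve it at query time; if what you need is a consistent *behaviour* like this summary style, examples like these are the lever.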
The third is pretraining, essentially training your own LLM from scratch. The course presents it as a last resort because it’s costly and usually relevant only to very large companies. For most teams, the practical path is to use an existing model, add RAG, optionally fine-tune, and focus on the product, not the underlying transformer architecture.
Module 2 also touches on tool use. This is already very practical: the model can request actions like calling a calculator, searching an API, or interacting with an internal system, and your application carries them out.
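A sketch of that tool-use loop: the model returns a structured “action request,” and the application — not the model — executes it. The JSON shape here is invented for illustration; real APIs define their own function-calling schemas:

```python
import json

def calculator(expression: str) -> str:
    # Toy calculator; never eval untrusted input in a real system.
    allowed = set("0123456789+-*/(). ")
    if not set(expression) <= allowed:
        raise ValueError("unsupported expression")
    return str(eval(expression))

TOOLS = {"calculator": calculator}

def handle_model_output(model_output: str) -> str:
    """If the model requested a tool, run it; otherwise return the text as-is."""
    try:
        request = json.loads(model_output)
    except json.JSONDecodeError:
        return model_output  # plain-text answer, no tool call
    tool = TOOLS[request["tool"]]
    return tool(request["input"])

# Simulated model turn asking the application to do the math for it.
print(handle_model_output('{"tool": "calculator", "input": "19 * 23"}'))  # 437
```

The architectural point is the division of labour: the model decides *what* to do, your application decides *whether and how* to do it.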
Finally, I like the project lifecycle Andrew describes because it feels familiar to software architects. You scope the project and define what “good enough” means. You build a quick prototype, often in days. You test internally and find weird edge cases and failure modes. You iterate on prompts, RAG setup, or fine-tuning. You deploy, monitor, collect more real examples, and keep improving. In many ways, building generative AI systems is still software development, just with faster iteration loops.
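The “test internally, iterate” step in that lifecycle can start as simply as scoring a prompt against a handful of labelled examples. A minimal sketch — the `classify` callable stands in for your prompt-plus-model combination:

```python
def accuracy(classify, labelled_examples) -> float:
    """Fraction of examples where the classifier matches the expected label."""
    correct = sum(1 for text, label in labelled_examples if classify(text) == label)
    return correct / len(labelled_examples)

examples = [
    ("Great food, friendly staff!", "positive"),
    ("Cold soup and slow service.", "negative"),
    ("Best brunch in town.", "positive"),
]

# Stub standing in for a prompt-backed classifier.
naive = lambda text: "negative" if "slow" in text.lower() else "positive"
print(accuracy(naive, examples))  # 1.0
```

Each prompt tweak or RAG change gets re-scored against the same examples, which is exactly the fast iteration loop the module describes — and as real usage surfaces edge cases, they become new rows in `examples`.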
Module 3 – Generative AI in Business and Society
Module 3 zooms out from technology and looks at how generative AI affects businesses and teams. What stood out to me is that the most helpful way to think about AI in an organization is not in terms of jobs, but in terms of tasks.
Every job consists of multiple tasks, and these tasks vary in how well they lend themselves to automation or augmentation. For example, a customer service representative doesn’t “just handle support.” They respond to chat, look up orders, document interactions, and escalate issues. Some of these tasks are very “LLM-friendly,” while others still require human judgment. As an architect, this makes it much easier to spot high-value opportunities without imagining that AI should replace entire roles.
The module also shows how automation often leads to new workflows, not just cost savings. When a task becomes faster or cheaper, teams can suddenly try things they couldn’t before, like generating multiple marketing variations and running A/B tests immediately. In many cases, the biggest impact comes from rethinking the workflow, not just making one step more efficient.
Another important insight is that generative AI impacts knowledge work more than manual work. Roles in marketing, operations, legal, customer support, and software development are seeing the largest changes. But instead of “AI replacing people,” the more realistic pattern is the one we already see in radiology: professionals who use AI will outperform those who don’t.
The last part of Module 3 is a reminder that with all this power comes responsibility. As architects, we need to consider how AI affects people, workflows, privacy, and fairness. Responsible AI isn’t about perfection; it’s about being intentional in what we automate, how we use data, and how we communicate the limitations of the systems we build.
If I had to summarize Module 3 in one sentence, it would be this: the value of generative AI lies in understanding tasks, redesigning workflows, and being thoughtful about its impact on people.
Conclusion
If you’re considering taking “Generative AI for Everyone” yourself, I’d recommend it as a solid conceptual baseline. It won’t teach you how to implement RAG in code or fine-tune a model, but it does a great job explaining the core ideas you’ll keep coming back to as you learn more advanced techniques in the coming weeks. It’s the kind of course that gives you the mental models first, so the technical tools make more sense later.
It also helped clarify my own path from solution architect to AI architect. I feel less overwhelmed by terminology and hype and more focused on the principles that actually matter when designing AI-enabled systems. It didn’t just teach me concepts; it gave me a clearer sense of direction.
This course was one more step in building the foundation I need to grow into an AI architect… and I’m excited for what comes next.
You can read my previous posts in the “From Solution Architect to AI Architect” series, where I share my learning journey, the courses I take, the concepts that actually matter, and the skills I’m building along the way.
To explore practical ways to use prompts in architecture, check out my book Generative AI for Software Architects: How to Use LLMs to Boost Your Productivity.