Agents Bootcamp - Anniversary Reflection
It's been almost a year since we started our bootcamp and it has changed so much!
I have worked in AI for the majority of the past decade as a hands-on-keyboard coder, corporate manager, and founder. One lesson is clear across the board: there's a massive gap between theory and practice. Even the first product we launched at Aggregate Intellect in 2020, well before most people had heard of language modelling, used AI to reduce the so-called “translation gap” between conceptual and activated, practical knowledge. The n-th iteration of that product was eventually killed by ChatGPT in 2022, but the problem is still around and more pronounced than ever.
We've all seen those flashy LinkedIn posts and Twitter threads about agentic systems doing all kinds of fun and productive things. As is the norm on social media, however, most of them fail to show the complexity that goes into creating such a system for real-world use. Much of this is because they’re probably just cute demos, but some of it might also be because showing the shiny result gets far more likes than the messy hustle along the way. They make it sound straightforward, but anyone who's attempted to build one in production knows the truth: building a robust, well-behaved agentic system is incredibly challenging.
So, now the question is: what do you have to do if you want to build something more serious than a simple demo, something that can withstand rigorous evaluation?
This disconnect is precisely what prompted the journey that eventually became our bootcamp, “Build Multi-Agent Applications”.
Why Traditional Learning Falls Short
The problem with most educational content in general, and content about LLM systems in particular, is that it focuses on concepts rather than implementation challenges. Given what social media rewards, content creators at best try hard to balance education and entertainment, and in most cases fail to offer any meaningfully useful practical content.
At the end of the day, nothing teaches you like struggling until that magical moment of wrapping your head around it. You can read dozens of articles about agent architectures, but they won't prepare you for the obstacles you encounter while building:
Finding the right scope between ideas that are too big and fluffy or too small and boring
Dealing with unreliable tool integrations that introduce unexpected failure points
Navigating complex cloud infrastructure that requires specialized knowledge
Building product evals while designing and running experiments
Prompt engineering that works in playgrounds but fails in production
These aren't theoretical problems – they're practical engineering challenges that require hands-on experience to solve effectively.
Agent Development Lifecycle
Building complex AI systems isn't the neat, linear process that tutorials often make it out to be. In reality, it's messy, unpredictable, and requires constant adaptation. Whether you're rethinking your architecture at the last minute, troubleshooting broken integrations or trying to wrap your head around evaluating your agent’s performance, real-world development demands flexibility and problem-solving on the fly.
In the wild, this would be done in sprints that might take several months and often involve a lot of back and forth between the various phases of design, development, deployment, and demoing. This structured chaos is a very efficient engineering process: an intense, fast-paced environment that reflects and tames the unpredictability of agentic product development.
In designing our bootcamp’s model, we tried hard to mirror this reality. Our bootcamp is not a final product yet and we are still iterating, but our goal remains the same: offering participants an authentic taste of real-world AI engineering in a contained, well-structured sandbox.
When we first launched our bootcamp, we had a linear curriculum that walked participants through agent development step by step. That was easier to teach, easier to market, and probably easier for participants to “feel” they had achieved something. But the honest truth is that it wouldn't prepare them for building in the messy real world, especially one where a new LLM or framework you could be using appears seemingly every hour.
Over the past year, we observed those who succeed in the program and tried to design around what our top participants do to thrive:
They were curious, scrappy and experimental
They asked A LOT OF questions
They enjoyed the structure as long as it didn’t limit their freedom to play
What did we do in response?
We curated all the theory into a learning path participants can study before the cohort and use as a reference during it
We dropped the lecture-style course and replaced it with a bootcamp-style mentorship program
We onboarded several experienced assistants (some of them our alumni) with whom participants can book 1:1 calls to problem-solve and co-develop
“A mentorship model rooted in extensive experience is precisely what’s needed to build practical skills and achieve tangible outcomes”
~ Mykola, Bootcamp Testimonials
We built our program around a 3-sprint model - design, develop, deploy - that deliberately introduces the kinds of challenges, pivots, and iterative development cycles that characterize real-world engineering. And to give everyone a final rush, we cap this off with a final week of preparing for a public demo where we invite guests like experienced founders, investors, and corporate directors to give feedback to the teams.
“The instructors create an amazing environment for learning through a good 3-sprint structure, by inviting industry speakers and by being there themselves to help with all endeavors”
~ Sinan, Bootcamp Testimonials
The program looks something like this:
Before Week 1: We run a free workshop and invite registered bootcamp participants and the broader community to join. For the broader community, this is an opportunity to get to know the teaching staff and our approach. For bootcamp participants, this is an optional preparation period. We explain our ways, provide templates (see "Agents Playbook"), and help them build something really quickly.
"I really like the curated material and how your team supports each other in presenting and handling all the questions that are thrown at you during the calls."
~ Richard, workshop participant
Weeks 1-2 (DESIGN): During these initial weeks, participants create detailed workflow diagrams and requirement documents. Our mentors repeatedly challenge them with variations of "why does this need an agent?" - a question that often leads to important realizations about where AI actually adds value.
"We had to write a requirements document before building our application which highlights important factors to consider when building an agentic application."
This design-first approach contradicts the typical developer instinct to jump straight into coding. But the reality is that premature implementation almost always leads to architectural problems that become exponentially harder to fix later. The design does not need to be and often is not perfect, but having the foundation that you can iterate on is super important in gaining velocity later.
Another important aspect of the first few weeks is team formation and getting to know the cohort participants. The participants’ immediate teams and the community formed by the whole cohort are the first and foremost layer of support and learning.
“I particularly like working within teams and the feedback from fellow bootcamp'ers.”
~ Mick, Bootcamp Testimonials
“It was also a fantastic opportunity to connect with and collaborate alongside experts in the field, making the [bootcamp] not only educational but also a great networking platform. “
~ Mykola, Bootcamp Testimonials
“I was also pleasantly surprised by the caliber of my peers in the cohort - the mix of expert instructors and highly motivated classmates makes this [bootcamp] a great investment for anyone looking to master agentic applications.”
~ Murtaza, Bootcamp Testimonials
The common challenges in this phase are:
Crafting the right scope for the project, one that is neither too big nor too trivial
Finding someone who has the pain point in question and can serve as the app's first user for feedback and testing
Building the simplest version of the app in less than a day so that you can learn from it quickly
The teaching staff are available via 1:1 calls to work through these with the teams.
“There is a lot going on in the field and the project helped staying grounded by focusing on the process of breaking the problem down and decomposing to tasks workflows and then grow towards agentic structures which was an amazing experience”
~ Jayant, Bootcamp Testimonials
Once you come out of these two weeks, what you’ll have learned, hopefully, is that good ideas don’t grow on trees; they are the outcome of an intentional, rigorous, and iterative process of experimentation, feedback, and learning.
Weeks 3-4 (DEVELOP): With the foundation laid in sprint 1, participants get to wrestle with development issues, misbehaving prompts, and plenty of “wait, why is this not doing what I want?” moments. And hopefully these are followed by “ah, so that’s how you do it” moments in 1:1 conversations with the teaching staff.
“The instructors … gave valuable tips and feedback during the [bootcamp] and 1:1 meetings. I particularly liked the possibility of learning while building a product, and I enjoyed working with a small team”
~ Elena, Bootcamp Testimonials
The common challenges in this phase are:
Debugging code or no-code implementations
Creating good evaluation datasets that can actually help you iterate
Navigating the jungle of tools and frameworks you might be able to use
Deciding whether to use CrewAI, LangGraph, or a custom Python implementation
Discovering late in the development process that AI agents need systematic testing approaches
The hope is that you come out of this experience with a working solution that is evaluated and is starting to look like something that you can tame.
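One of those challenges, building an evaluation dataset you can actually iterate against, can be sketched in just a few lines. This is a minimal illustration under stated assumptions, not code from the curriculum: `run_agent` and the eval cases are hypothetical stand-ins for your own agent entry point and data, and the "must contain" check is deliberately the simplest scoring rule you could start with.

```python
import json

# A tiny eval dataset: each case pairs an input with a simple pass/fail check.
# Real datasets would grow from user feedback gathered in sprint 1.
EVAL_CASES = [
    {"input": "What is 2 + 2?", "must_contain": "4"},
    {"input": "Name the capital of France.", "must_contain": "Paris"},
]

def run_agent(prompt: str) -> str:
    # Hypothetical placeholder: swap in the call to your real agent here.
    canned = {
        "What is 2 + 2?": "The answer is 4.",
        "Name the capital of France.": "Paris is the capital of France.",
    }
    return canned.get(prompt, "")

def evaluate(cases):
    """Run every case and return a pass rate you can track across iterations."""
    results = []
    for case in cases:
        output = run_agent(case["input"])
        results.append({"input": case["input"],
                        "passed": case["must_contain"] in output})
    pass_rate = sum(r["passed"] for r in results) / len(results)
    return pass_rate, results

if __name__ == "__main__":
    rate, results = evaluate(EVAL_CASES)
    print(json.dumps({"pass_rate": rate, "results": results}, indent=2))
```

The point of a harness this small is that it runs in seconds, so every prompt tweak or architecture change gets an immediate, comparable score.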
Weeks 5-6 (DEPLOY): By this point, teams have encountered API rate limits, implemented workarounds in Chainlit, and conducted late-night testing sessions.
"Setting up AWS infrastructure for NVIDIA GPUs is difficult. The learning curve is steep but necessary for building production-quality systems."
The common challenges are:
What’s the cheapest and fastest way to deploy this so that some early users can test it?
How can I expand my evaluation and testing?
How can I iterate on the implementation quickly to tame the agent’s behavior further?
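For the first of those questions, the cheapest deployment is often just a thin HTTP wrapper around the agent. Here is a minimal sketch using only Python's standard library; `run_agent` is a hypothetical placeholder for your real agent call, and the request/response shape is an assumption for illustration, not a prescription.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def run_agent(prompt: str) -> str:
    # Hypothetical placeholder: replace with your real agent invocation.
    return f"echo: {prompt}"

class AgentHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body, e.g. {"prompt": "..."}.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        answer = run_agent(payload.get("prompt", ""))
        body = json.dumps({"answer": answer}).encode()
        # Return the agent's answer as JSON.
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def main():
    # Serve on port 8000; early users can POST prompts and test the agent.
    HTTPServer(("0.0.0.0", 8000), AgentHandler).serve_forever()
```

Something this small can run on the cheapest VM available, which is usually enough to put a prototype in front of early testers while you keep iterating.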
Hopefully, you will come out of this sprint with a working app that is deployed and ready to be shown off!
Week 7 (DEMO): Each team delivers a seven-minute presentation to an audience that includes investors and industry professionals.
"The demos exceeded our own expectations. It is exciting to have my team interested in continuing to work on the project beyond the [bootcamp]. "
~ Maher, Bootcamp Testimonials
What's particularly interesting is how rarely projects follow a straight path. Some teams completely change their architecture halfway through. Others discover fundamental limitations in their chosen tech stack. This unpredictability serves a purpose: it prepares participants for the real challenges they'll face when developing AI systems after the bootcamp.
Real Projects Built by Real People
Now you ask: what kind of projects has this structure produced? Glad you asked! Let me highlight a couple of examples where alumni also walk us through their experiences:
Dungeon-Master Assistant ~ Sinan Ozel
Sinan joined with enthusiasm for Dungeons & Dragons and completed the bootcamp with a functioning multi-agent narrative generator running on dedicated GPUs. In his detailed Medium article, he warned about infrastructure challenges:
"You need to configure every component of a Kubernetes cluster, including the complicated IAM settings, before the model will even start working."
What's notable here is that the bootcamp empowers the participants to pursue technically complex projects and still finish on schedule, provided they carefully manage scope and prioritize features.
ReferWell ~ Mick Lynch
Mick tackled the challenging area of physician-to-specialist referrals by creating an agent system combining healthcare data standards (FHIR), vector search, and human verification steps. He identified specific problems in healthcare:
"The current referral process suffers from specialist mismatches and loses up to 70% of patient data. Our agent workflow maintains complete context throughout the patient journey."
While his final demo showed prototype-level functionality, it represented a practical application with potential for further development into production-ready software. Pshh, don’t tell anyone, but we might have even facilitated a meeting for them with an angel investor who saw their demo!
Check out the video playlist of the public demos.
Who Should Consider This Approach?
Based on the patterns I've observed in successful participants, this intensive bootcamp model works best for people who:
Are willing to examine and iterate on their idea to identify specific workflows that could benefit from automation
Are comfortable sharing unfinished work with others under tight deadlines to get feedback
Are interested in direct, honest feedback and the messy process of building rather than cleanly structured theoretical lectures
Those unsure whether the program fits their needs can attend our FREE ongoing workshop sessions, where you can expect to hear about:
how to refine your idea into a design
how to take a quick stab at implementing your idea (focus on no-code in this session - coding extensions will come in session 2 and 3)
how to use workshop sessions to earn bootcamp refunds via the Incentive Program!
anything else you want to know about agents, our bootcamp, etc
The next cohort runs April 28 – June 13, 2025.
What happens after the Bootcamp
Seven weeks cannot turn anyone into a complete expert in agent development – that’s just not realistic. However, as shown across multiple cohorts, the program effectively compresses the learning process into a well-scoped timeframe with clear deliverables.
What’s particularly interesting is that the impact of the bootcamp doesn’t end when the formal program concludes. Many participants continue developing their projects long after the bootcamp finishes and with friends they just met.