The L&D Function Has a Build Problem. AI Just Fixed It.
- Aparajita Sihag
How I built and deployed multilingual training simulations in hours, with zero coding experience, zero budget, and zero vendor dependency.
Last week, I had never opened GitHub. I didn't know what a repository was. I had never written a line of code, never "deployed" anything, and the phrase "vibe coding" would have drawn a blank stare.
Today, I have four fully interactive, production-grade training simulations live on the internet, in three languages, accessible to anyone with a link. I built them in a single sitting using AI.
No developer. No vendor. No budget approval. No IT ticket.
This is not a story about me. This is a story about what just became possible for every L&D professional willing to try.
The build bottleneck nobody talks about
Here is a truth most L&D leaders know but rarely say out loud: the hardest part of our job is not designing learning experiences. It is getting them built.
We are a function full of people who can diagnose capability gaps, map competency frameworks, design experiential learning journeys, and articulate precisely what a frontline worker needs to know, feel, and do differently after a training intervention. We know our craft.
But the moment the design is done, we hit a wall. The wall is called implementation.
Want an interactive simulation instead of a slide deck? That requires a developer. Or a vendor. Or an authoring tool with a six-month procurement cycle. Want it in three languages for a workforce spread across states? Triple the timeline and the cost. Want to iterate quickly based on learner feedback? Good luck getting a change request through an outsourced development queue.
So what happens in practice? We compromise. The immersive simulation becomes a branching e-learning module. The multilingual rollout becomes "we'll do Hindi later." The rapid iteration becomes a v2 that never ships. And the frontline worker gets another PDF or another 45-minute mandatory LMS course that they click through without absorbing a thing.
This is not a capability problem. It is a dependency problem. The L&D function has outsourced its ability to build, and in doing so, it has outsourced its ability to move fast, experiment, and deliver learning at the speed the business actually needs.
What I built, and how
Let me be specific about what happened, because the specifics are where the argument lives.
I order from Blinkit at least once a day. Probably more than I should. And at some point, the L&D part of my brain started doing what it always does: watching the delivery partner and thinking about training. These are frontline workers operating under extreme time pressure, navigating unfamiliar neighbourhoods, and handling frustrated customers multiple times a day. The stakes of getting their training right are tangible. Every bad interaction is a lost customer, a bad rating, and potentially a deactivated partner.
I wanted to experiment. What would it look like if I tried to build a training simulation for this role, not as a project with a timeline and a vendor, but as a quick test of what is actually possible today?
If I had done this through a conventional L&D process, the sequence would have been: design the content, brief an instructional designer, engage a vendor for interactive development, go through review cycles, deploy through an LMS. Timeline: 6 to 12 weeks. Budget: significant. Languages: maybe one to start with.
Instead, I sat down with Claude and described what I wanted. A web-based simulation where the learner faces real scenarios. Stuck in traffic with a customer messaging angrily. Lost inside a gated society with GPS failing. Standing at a customer's door while they berate you about melted ice cream. I wanted it to feel like real life, not like a quiz. I wanted the learner to make choices under pressure and receive coaching when they chose wrong. Not a red "X" and a correct answer reveal, but genuine guidance on why the instinct was wrong and what the better instinct looks like.
What came back was a fully functional interactive simulation with a phone-screen mockup showing the delivery app, animated maps depicting the scenario, timed pressure indicators, and four response options per scenario, each with differentiated feedback and coaching tips. It looked and felt like a product, not a prototype.
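To make the shape of that concrete, here is a minimal sketch of how one such scenario could be structured. All of the names, fields, and strings below are my own illustrative assumptions, not the actual code the AI generated; the point is the pattern: a timed situation, four response options, and coaching feedback differentiated per option rather than a bare right/wrong verdict.

```python
# Hypothetical sketch of a simulation scenario: four response options,
# each carrying its own coaching text instead of a simple correct/incorrect flag.
from dataclasses import dataclass, field

@dataclass
class ResponseOption:
    text: str        # what the learner can choose to do
    is_best: bool    # whether this is the recommended instinct
    coaching: str    # why this choice works or fails, not just a verdict

@dataclass
class Scenario:
    title: str
    situation: str       # the setup shown on the mock phone screen
    time_limit_s: int    # pressure timer, in seconds
    options: list[ResponseOption] = field(default_factory=list)

    def feedback_for(self, choice_index: int) -> str:
        """Return the coaching text for whichever option was chosen."""
        return self.options[choice_index].coaching

# Illustrative example: the melted-ice-cream doorstep scenario.
doorstep = Scenario(
    title="Melted ice cream at the door",
    situation="The customer is angry: the ice cream arrived melted.",
    time_limit_s=30,
    options=[
        ResponseOption("Argue that traffic was not your fault", False,
                       "Defending yourself escalates the conflict. "
                       "Acknowledge the problem first."),
        ResponseOption("Apologise, acknowledge, and offer the refund flow", True,
                       "Leading with empathy defuses the situation and "
                       "keeps the rating recoverable."),
        ResponseOption("Say nothing and leave quickly", False,
                       "Silence reads as indifference and invites a "
                       "one-star rating."),
        ResponseOption("Blame the store for slow packing", False,
                       "Shifting blame within the brand still damages "
                       "the customer's trust."),
    ],
)

print(doorstep.feedback_for(1))
```

The design choice that matters is that every option, including the wrong ones, carries guidance. That is what separates a coaching simulation from a quiz.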
Then I asked for it in Hindi. Then in Marathi. Then I asked for a landing page that tied all the simulations together.
Then I asked how to put it on the internet.
Claude walked me through creating a GitHub account, setting up a repository, enabling GitHub Pages, fixing filename conventions, and debugging the 404 errors a first-time user inevitably makes. Within an hour, the whole thing was live at a public URL that I could share with anyone.
What this actually means for L&D
I want to be careful here, because the temptation with any new technology is to overclaim. AI is not going to replace instructional designers or make L&D strategy unnecessary. Deep expertise in adult learning, behaviour change, and organisational development still matters enormously. Arguably more than ever, because the tool is only as good as the thinking that directs it.
But here is what has genuinely changed: the distance between having an idea and having a working product just collapsed from weeks to hours.
Think about what that unlocks.
Rapid prototyping becomes real. You can build a functional simulation, put it in front of five learners, watch what works and what doesn't, and iterate the same day. No wireframes, no mockups, no "imagine this is interactive." The prototype IS the product.
Multilingual delivery stops being a phase-two afterthought. If you can describe what you want in one language, you can have it in five. Not translated. Genuinely localised, with natural phrasing and cultural context. For a country like India, where your frontline workforce may speak Hindi in one state, Marathi in another, and Tamil in a third, this is transformative.
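One practical consequence of building multilingual from the start is that the simulation logic can stay identical across languages, with only the strings varying. The sketch below is my own illustration of that pattern; the placeholder strings are not the actual localised copy from the simulations described above, and the structure is an assumption, not the generated code.

```python
# Hypothetical per-language content table: same scenario, same logic,
# different strings for English, Hindi, and Marathi.
SCENARIO_TEXT = {
    "en": {"title": "Stuck in traffic",
           "prompt": "The customer is messaging angrily. What do you do?"},
    "hi": {"title": "ट्रैफ़िक में फँसे हुए",
           "prompt": "ग्राहक गुस्से में मैसेज कर रहा है। आप क्या करेंगे?"},
    "mr": {"title": "ट्रॅफिकमध्ये अडकला आहात",
           "prompt": "ग्राहक रागाने मेसेज करत आहे. तुम्ही काय कराल?"},
}

def render(lang: str) -> str:
    """Render the scenario header, falling back to English if a
    language is missing its strings."""
    text = SCENARIO_TEXT.get(lang, SCENARIO_TEXT["en"])
    return f"{text['title']}: {text['prompt']}"

print(render("hi"))
```

Keeping content and logic separate like this is also what makes "then I asked for it in Hindi" a minutes-long request rather than a rebuild.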
The feedback loop tightens dramatically. When building is cheap and fast, you can afford to experiment. You can create three versions of a scenario and A/B test which framing produces better learning outcomes. You can ship a rough version on Monday, gather data, and ship a better version on Wednesday. This is how product teams work. L&D has never been able to operate this way because the build cycle was too slow and too expensive.
The L&D professional becomes a full-stack operator. Not in the software engineering sense, but in the sense that matters: you can go from insight to intervention to deployment without a handoff. The person who understands the capability gap is the same person who builds the solution. That eliminates the signal loss that happens every time a design brief gets interpreted by someone who has never met the learner.
The uncomfortable implication
There is something uncomfortable in all of this, and I think it is worth naming directly.
If a single L&D professional can now build and deploy interactive, multilingual training simulations in hours, what exactly are we spending months and large budgets on?
I am not suggesting that everything can or should be built this way. Enterprise-scale learning ecosystems, LMS integrations, compliance frameworks, certification programmes: these have complexity that goes beyond what a single person with an AI tool can handle in an afternoon.
But a significant portion of what L&D teams produce (the scenario-based modules, the onboarding walkthroughs, the soft-skill simulations, the quick-reference job aids) can now be built at a fraction of the time and cost. And built better, because the person building it is the person who understands the learning need, not a developer two handoffs removed from the original insight.
This means the value proposition of L&D is shifting. If building is no longer the bottleneck, then being a good builder is no longer a differentiator. What differentiates is the quality of thinking that precedes the build: how well you diagnose the problem, how deeply you understand the learner, how cleverly you design the experience, and how rigorously you measure whether it worked.
The practitioners who thrive in this new landscape will be the ones who combine deep L&D expertise with the willingness to get their hands dirty with new tools. Not to become developers, but to become builders. People who can take an idea from napkin to production without waiting for permission or a purchase order.
What I learned about learning
There is an irony in all of this that I want to name: the experience of building these simulations was itself one of the most effective learning experiences I have had in recent memory.
I learned what a GitHub repository is by creating one. I learned what a 404 error means by getting one. I learned about file naming conventions for the web by breaking them and seeing what happened. I learned how deployment works by deploying something.
At no point did I complete a course, read a manual, or sit through an explainer video. I had a task I cared about, a tool that could help, and the willingness to work through errors. The AI did not just build the simulations for me. It coached me through every step, explained every error, and adjusted its guidance based on what I was actually seeing on my screen.
This is, incidentally, exactly the kind of learning experience we should be designing for our own learners. Contextual, task-embedded, error-driven, and immediately applicable. The delivery partner in my simulation learns by facing a scenario and making a choice. I learned by facing a deployment and making mistakes. The principle is the same.
A challenge to my fellow practitioners
If you are an L&D professional reading this, I have a challenge for you: build something this week.
Not a slide deck. Not a document. Something interactive, something a learner can experience, something that lives on the internet and can be accessed with a link.
You do not need to know how to code. You do not need a budget. You do not need permission from IT. You need a clear idea of what your learner needs to practice, and a willingness to sit with an AI tool and describe it in plain language.
The first time will feel unfamiliar. You will hit errors. File names will be wrong. Links will break. You will feel like you are out of your depth. That feeling is called learning, and we of all people should know how to sit with it.
The L&D function has spent decades telling the rest of the organisation to embrace change, build new capabilities, and step outside comfort zones. It is time we took our own advice.
The build bottleneck is gone. The question now is: what will you build?