AI-powered concierge console
Using AI to transform concierge assistants from researchers into experience curators
What if we built for the output, not the process? 3.5 days became 2 minutes, quality improved 4x, and assistants were freed to focus on curating experiences, not hunting for information.
At a glance
Client: Consumer family concierge startup – helping busy families fulfil tasks and plans via human assistants
Format: Extended AI Product Sprint (12 weeks)
My role: product & tech lead – strategy, UX, prototyping
Team: a super-focused squad of three – me, a product designer, and a PM
What shipped
Fully functional concierge console prototype with a live demo, plus comprehensive technical POCs and design artefacts detailing the implementation roadmap and next steps
The problem
Concierge assistants were spending ~3.5 days researching, vetting, and assembling proposals for customers – slow, expensive, and inconsistent under load. Free-form proposals were also hard to review, gather feedback on, and iterate on.
The key insight: the output is the proposal
The breakthrough wasn't "build an AI-powered search tool" – it was recognising that the core unit of work is a proposal artefact that can be structured.
We spent the first week truly understanding the problem, focusing on the why rather than the how. We looked beyond "AI search" and beyond the brief to understand what assistants were actually producing and why it took so long.
Atomic proposals
Breaking proposals down into structured templates composed of reusable components (sketched below) sped up research, improved the quality of results, and enabled better feedback loops. Combined with rich family profiles, the new console prototype delivered highly personalised proposals that could be iterated on in minutes, not days.
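As a rough illustration of what "atomic proposals" means in practice, here is a minimal sketch of the kind of structured model involved. All names and shapes here are illustrative assumptions, not the actual production schema:

```ts
// Illustrative sketch only – names and shapes are assumptions, not the real schema.

// An "atom" is the smallest reusable unit of a proposal,
// e.g. a vetted venue, a price quote, or a logistics note.
interface Atom {
  id: string;
  kind: "venue" | "activity" | "pricing" | "logistics" | "note";
  title: string;
  body: string;      // human-readable content shown to the customer
  source?: string;   // where the research came from, for vetting
}

// A "molecule" groups related atoms into a reviewable section,
// e.g. "Saturday afternoon options" or "Budget breakdown".
interface Molecule {
  id: string;
  heading: string;
  atoms: Atom[];
}

// A template defines which molecules a given proposal type needs, so output
// is consistent and each part can be reviewed or regenerated independently.
interface Proposal {
  id: string;
  todoId: string;           // the customer request this fulfils
  familyProfileId: string;  // drives personalisation
  template: string;         // e.g. "weekend-trip" or "birthday-party"
  molecules: Molecule[];
  status: "draft" | "in_review" | "approved" | "sent";
}
```

Structuring the artefact this way is what unlocks the speed: each section can be researched, reviewed, and regenerated on its own, rather than rewriting a free-form document.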
What we designed and built
A fully functional console prototype that streamlined the entire to-do fulfilment workflow. The system could simulate customer to-dos coming in, automatically trigger AI-powered research and proposal generation, and then bring in human assistants for curation and refinement.
A structured proposal data model (templates with atoms & molecules) to standardise output and enable rapid iteration
A comprehensive set of AI workflows and technical POCs demonstrating the feasibility and capability of different approaches, from search to generation
Human-in-the-loop review and curation steps (edit, approve, regenerate parts) via a simple, traditional UI instead of prompt engineering – see the sketch after this list
An interactive demonstration of a hybrid approach – a database of vetted recommendations combined with generative AI – designed to capture feedback from the start
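To make the review step concrete, here is a minimal sketch of what the human-in-the-loop step could look like, reusing the Proposal, Molecule, and Atom shapes sketched earlier. regenerateMolecule is a hypothetical hook into the generation pipeline, not a real API:

```ts
// Sketch only – assumes the Proposal/Molecule/Atom types from the earlier sketch.

type ReviewAction =
  | { type: "approve" }
  | { type: "edit"; moleculeId: string; atomId: string; newBody: string }
  | { type: "regenerate"; moleculeId: string; guidance?: string };

// Every curation action doubles as a structured feedback signal.
const feedbackLog: { proposalId: string; action: ReviewAction; at: string }[] = [];

async function applyReview(proposal: Proposal, action: ReviewAction): Promise<Proposal> {
  feedbackLog.push({ proposalId: proposal.id, action, at: new Date().toISOString() });

  switch (action.type) {
    case "approve":
      return { ...proposal, status: "approved" };
    case "edit":
      // Replace a single atom's body; everything else is untouched.
      return {
        ...proposal,
        molecules: proposal.molecules.map((m) =>
          m.id !== action.moleculeId
            ? m
            : { ...m, atoms: m.atoms.map((a) => (a.id === action.atomId ? { ...a, body: action.newBody } : a)) }
        ),
      };
    case "regenerate": {
      // Only the targeted section is regenerated – approved parts stay as they are.
      const fresh = await regenerateMolecule(action.moleculeId, action.guidance);
      return {
        ...proposal,
        molecules: proposal.molecules.map((m) => (m.id === action.moleculeId ? fresh : m)),
      };
    }
  }
}

// Hypothetical: calls the generation pipeline for one section only.
declare function regenerateMolecule(moleculeId: string, guidance?: string): Promise<Molecule>;
```

Because every edit, approval, and regeneration lands in the feedback log as structured data, curation itself becomes a signal the recommendation database can learn from.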
Timeline
During the initial discovery phase, we created a plan identifying the key building blocks of functionality and design work. We then executed this plan by running design and prototyping in parallel, with each work-stream informing the other.
Outcomes
Speed: Time to create a proposal: ~3.5 days → ~2 minutes
Quality: an estimated 4x improvement, driven by consistency and structure
Operational impact: fewer human hours per proposal, freeing assistants to focus on higher-value work (curation, customer interaction)
What I learned
The problem comes before the tech: model the artefact or UX first, then apply AI. It is much easier to design around constraints once you know the right thing to build than to shoehorn AI into the wrong question.
Speed only matters if quality is preserved or improved. Not all research is equal – being more specific yields better results. Spending slightly more time per request saves time in the long run, because far less has to be regenerated.
Reusable 'atoms' create compounding value across requests. A library of reusable components (atoms & molecules) enables rapid assembly of new proposals, consistency in quality, and easier feedback capture – and all of this compounds over time, as the sketch below illustrates.
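A minimal sketch of that compounding effect, assuming the same Atom and Molecule shapes as above; AtomLibrary and generateAtoms are hypothetical stand-ins for the real storage and generation layers:

```ts
// Sketch only – reuse vetted atoms first, generate only the gaps.

interface AtomLibrary {
  // Hypothetical lookup: previously approved atoms matching a need and a family profile.
  find(kind: Atom["kind"], familyProfileId: string): Atom[];
}

async function assembleMolecule(
  heading: string,
  kind: Atom["kind"],
  familyProfileId: string,
  library: AtomLibrary,
  targetCount = 3
): Promise<Molecule> {
  // 1. Reuse vetted atoms first – instant, consistent, already approved.
  const reused = library.find(kind, familyProfileId).slice(0, targetCount);

  // 2. Generate only what the library cannot cover yet.
  const missing = targetCount - reused.length;
  const generated = missing > 0 ? await generateAtoms(kind, familyProfileId, missing) : [];

  return { id: crypto.randomUUID(), heading, atoms: [...reused, ...generated] };
}

// Hypothetical: calls the AI research/generation pipeline for new atoms.
declare function generateAtoms(kind: Atom["kind"], familyProfileId: string, count: number): Promise<Atom[]>;
```

The more requests flow through the system, the fuller the library gets, and the less each new proposal depends on fresh generation (or regeneration).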
Interested in exploring how structured output design could transform your team's research workflows? I'd love to discuss what we learned and how similar approaches might apply to your context.