
UDig AI Week: We Cleared the Calendar & Went All In


For one week, UDig’s entire delivery team — engineers, designers, and product owners — put client work down and focused on one thing: rebuilding how we work with AI. 

We didn’t plan AI Week because we had a perfect strategy. We planned it because we looked at where software delivery is heading — the disappearing middle, the compression of the build phase, clients asking how we’re leveraging AI on their projects — and decided we needed to stop talking about it and go all in. Together. In one week. 

The alternative was incremental. A session here, a lunch and learn there. We thought about that. And then we decided to rip the bandaid off. 

Why a full week?

Pulling an entire delivery team away from client work for a week is not a small decision. There’s a real cost. There were real hesitations. People come in at different levels with AI. Some were skeptical. Some were a little scared. And a fair question hung in the air: how does this affect our craft? 

That question actually got us to the right answer. AI doesn’t replace craft. What it does — when you use it right — is give you more room for it. More iteration in less time. A tighter feedback loop from start to finish. Handoffs that stop looking like handoffs and start looking like a continuous conversation between design and engineering. 

But you can’t learn that in an afternoon. You learn it when you have no other choice but to be in it. 

So we cleared the calendar. We built mixed teams — engineers, designers, product leaders — many of whom had never worked directly together before. Day one was learning, then forming and norming. By day two, things started rolling. 

What we built and what happened

The projects weren’t hypothetical. They were real pain points — associate-facing experiences, client-impacting ideas, things we’d talked about wanting to build for a while but never had the time to actually explore. The constraint of the week forced focus. The competition of the token counter up on the screen forced energy. It worked. 

The moment that stuck with us most: designers in Claude Code. In the terminal. Starting prototypes with code instead of jumping straight to Figma. That’s not a small thing. That’s a signal of how the relationship between design and engineering is going to work from here on out — more symbiotic, less over-the-wall. 

Not everything went perfectly. Claude went down for over an hour on day one. Token management got tight at points. We let people free-range without enough prompt guidance and some burned through tokens faster than others. We learned that next time, dedicated prompt training is worth the investment. 

But here’s what the major Claude outage actually produced: teams slowed down and planned. Asked better questions. Focused on where they were creating value before diving into build. That instinct — the discipline to think before you generate — turned out to be one of the most important things people walked away with. 

What we’re taking with us

On the way back from a team event, one of our practitioners — someone who came in more reluctant — said she was feeling much better about it. That AI had come a long way from where it started. That’s the one. That’s why we did it. 

We’re also already measuring. Everyone did a self-assessment going in. We’ll do another one in June. And we’ve started building a scorecard for client engagements — not to push anything, but as a barometer for where clients are in their AI readiness and where we can actually help. 

The week ended. The work didn’t.

The week showed us what our teams are capable of. 

Because we built around real problems — not hypothetical ones — the work didn’t stop when the week did. Several of the concepts we prototyped are already making their way into client conversations as proof of concepts. That’s the compounding effect we were after. The week produced real work, and real work opens real doors. 

We’re doing it again.

The format will evolve as the tools do. This year was about getting to agentic. Next year, we’ll be building on what’s already in motion — deeper agentic workflows, sharper client integration, teams that barely remember what before looked like. The bar moves again. We’ll be ready. 

About Reid Braswell

With over 15 years in technology consulting, I've built my career at the intersection of engineering excellence and business impact. As UDig's Vice President of Engineering, I lead our software engineering practice and set the strategic direction for how we build, including driving UDig's shift toward AI-native delivery, where AI isn't just a tool we use but core to how our teams design and ship. My foundation as a practitioner informs everything: a bias toward quality, and a conviction that the teams who embrace this moment will define what great engineering looks like next.

About Josh Bartels

With over 20 years at the forefront of technology innovation, I've dedicated my career to delivering strategic solutions that drive business growth. As the Chief Technology Officer of UDig, I lead our technology vision, architecting solutions that transform how organizations leverage technology to generate impact.
