AI tools are part of how we work. They have been for a while now. The conversation around AI in development tends to split into two camps: people who think it changes everything and people who think it's overhyped. The view from inside a working studio is less dramatic and more practical than either side suggests.
We use AI to handle the repetitive production tasks that used to slow projects down. Code suggestions, boilerplate generation, debugging assistance, content drafts, quick prototyping. These tools save real time on work that's necessary but not where the value of a project lives.
We don't use AI for the decisions that determine whether a project succeeds or fails. Information architecture, CMS structure, design system decisions, integration logic, accessibility. These require understanding the business, the users, and how a project needs to evolve over time. AI tools can't evaluate those tradeoffs because they don't have the context.
The line between what AI handles and what we handle is clear. AI accelerates production. We make the decisions. The output is better and faster than either approach working alone.
Where AI earns its place
The most useful way to explain this is to be specific. Here's what AI actually does in our day-to-day work.
Code generation and suggestions
A significant portion of development work involves writing code that's structurally similar to code you've written before. Form validation patterns, responsive layout scaffolding, API call structures, component boilerplate. This is work that requires precision but not much original thinking. AI handles it well.
When we're building a set of components that share a common structure, AI generates the initial scaffold and we refine from there. A component that might take twenty minutes to write from scratch takes five with AI generating the starting point. Multiply that across a project with forty or fifty components and the time savings are real. The important qualifier is that everything AI generates gets reviewed and adjusted by an experienced developer. AI-generated code is a starting point. It becomes production-ready after a human evaluates it against the project's architecture, naming conventions, and quality standards.
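To make "structurally repetitive code" concrete, here is a hypothetical form-validation helper of the kind AI scaffolds well. It is an illustration, not code from an actual project; the names and rules are invented. This is the shape of output a developer would then adjust to match a project's conventions:

```typescript
// Hypothetical form-validation helper: structurally repetitive code
// that AI scaffolds quickly and a developer then refines.

type FieldRule = {
  required?: boolean;
  pattern?: RegExp;
  message: string;
};

type ValidationResult = {
  valid: boolean;
  errors: Record<string, string>;
};

function validateForm(
  values: Record<string, string>,
  rules: Record<string, FieldRule>
): ValidationResult {
  const errors: Record<string, string> = {};
  for (const [field, rule] of Object.entries(rules)) {
    const value = (values[field] ?? "").trim();
    if (rule.required && value === "") {
      errors[field] = rule.message;
    } else if (rule.pattern && value !== "" && !rule.pattern.test(value)) {
      errors[field] = rule.message;
    }
  }
  return { valid: Object.keys(errors).length === 0, errors };
}
```

The review step is where the human work happens: checking that the error messages match the project's voice, that the rules reflect what the backend actually requires, and that the helper fits the existing component architecture.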
Debugging and problem-solving
Describing a bug to an AI tool and getting potential causes back is often faster than searching through documentation or Stack Overflow threads. This is especially valuable for edge cases where the problem involves an interaction between multiple systems or libraries. The AI doesn't always get it right on the first attempt, but it narrows the search space. Instead of spending thirty minutes tracking down why a specific CSS behavior breaks in one browser, you get three likely causes in under a minute and can test each one.
This is particularly useful for solo work. In a larger team, a developer can turn to a colleague and talk through a problem. In a small studio, AI plays a similar role, acting as a thinking partner for technical problems.
Content drafts and documentation
Getting from a blank page to a working draft is one of the most time-consuming parts of any project. AI handles the initial pass on placeholder copy, component descriptions, internal documentation, and technical handoff notes. The output isn't final. It needs editing for voice, accuracy, and alignment with the project's communication strategy. But it compresses what used to be a two-hour task into a thirty-minute editing session.
Technical documentation for codebases is another area where AI saves meaningful time. Documenting component libraries, API endpoints, and CMS field structures is important work that tends to get deprioritized when deadlines are tight. AI generates a solid first draft of documentation from the codebase itself, which makes the difference between a project that ships with documentation and one that ships without it.
Prototyping and exploration
When a project calls for evaluating multiple layout approaches or interaction patterns, AI can generate variations quickly. Instead of building three different approaches from scratch to compare, we can prototype them rapidly and make a more informed decision about which direction to pursue. The prototypes aren't production quality. They're thinking tools that help us evaluate options faster.
This also applies to animation and interaction exploration. Describing an interaction in natural language and getting a working prototype back in seconds is a faster way to evaluate whether an idea works than building it by hand.
Research and orientation
Getting up to speed on an unfamiliar API, library, or platform feature is faster with AI assistance. When a project requires integrating with a system we haven't worked with before, AI can explain the documentation, surface relevant examples, and help us evaluate different approaches before we write any production code. This isn't a replacement for reading the actual documentation. It's a way to build a mental model of the system faster so the documentation makes more sense when we do read it.
Where the thinking has to be ours
This section is more important than the one above it because this is where the value of experience lives. AI tools are getting better at production tasks every few months. They are not getting better at the judgment calls that determine whether a project serves the business well over time.
Information architecture
How a site is organized, how different audiences move through it, what content lives where, and how the navigation supports the business goals. This is strategy work that requires understanding the business, its competitive landscape, and the mental models of its users. AI can generate a sitemap. It can't evaluate whether that sitemap serves three different audience segments with competing priorities. It doesn't know that the sales team needs partner content separated from customer content, or that the investor audience needs to reach financial information within two clicks of the homepage. These decisions come from conversations with the client and experience with how similar businesses structure their digital presence.
CMS structure and content modeling
How content relates to itself, who edits what, what happens when the site doubles in size. A content model that works for ten pages and breaks at fifty is an expensive mistake. Getting this right requires experience with how real teams use CMS platforms over time. How does a marketing coordinator with limited technical confidence interact with the editing interface? What happens when three people need to update the site simultaneously? How does the content structure accommodate a new service line without requiring a developer to reconfigure the CMS?
AI can suggest a content model based on best practices. It can't evaluate whether that model will hold up when a specific team with specific workflows and specific growth plans starts using it every day.
Design system architecture
Component naming conventions, hierarchy, token structure, how the system extends when new patterns are needed. These are governance decisions that affect every future design and development choice. Getting the architecture wrong doesn't surface immediately. It shows up six months later when the team tries to extend the system and discovers that the existing patterns don't accommodate the new requirements without breaking something.
Design system architecture requires understanding not just the current project but how the system will need to evolve. AI can generate a component library. It can't make the judgment calls about how the components should relate to each other or how the system should handle patterns that haven't been designed yet.
Integration logic
The connection between a website form and a CRM is simple enough for AI to scaffold. The logic for what happens when the CRM is down, when a duplicate record needs to be handled, when the data doesn't match the expected format, or when the API rate limit is reached requires engineering judgment. These are the edge cases that determine whether an integration works reliably in production or breaks the first time conditions aren't ideal.
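A minimal sketch of the kind of edge-case handling the paragraph describes: retry with exponential backoff for outages and rate limits, plus an idempotency key so a retried request doesn't create a duplicate record. `sendToCrm` is a stand-in for a real CRM client, and the key scheme is invented for illustration:

```typescript
// Hypothetical resilience wrapper around a CRM submission.
// `sendToCrm` is a stand-in, not a real API.

type CrmResponse = { status: number };

async function submitWithRetry(
  sendToCrm: (payload: { email: string }, idempotencyKey: string) => Promise<CrmResponse>,
  payload: { email: string },
  maxAttempts = 3
): Promise<CrmResponse> {
  // A stable key derived from the submission lets the CRM recognize
  // a retried request as the same record, not a duplicate.
  const idempotencyKey = `lead-${payload.email.toLowerCase()}`;
  let lastError: unknown;

  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const res = await sendToCrm(payload, idempotencyKey);
      if (res.status === 429 || res.status >= 500) {
        // Rate-limited or server error: treat as transient and retry.
        throw new Error(`transient failure: ${res.status}`);
      }
      return res;
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts) {
        // Exponential backoff before the next attempt.
        await new Promise((resolve) => setTimeout(resolve, 100 * 2 ** attempt));
      }
    }
  }
  throw lastError;
}
```

The code itself is simple. The judgment is in the policy it encodes: how many retries are acceptable before a lead is lost, what counts as a duplicate, and what the fallback is when the CRM stays down.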
Integration logic also involves understanding the client's business processes. A form submission might need to trigger different workflows depending on the form type, the submitter's location, or the time of day. That routing logic comes from understanding the business, not from generating code.
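The routing described above can be sketched as a small rules function. Every rule here is invented; in a real project each branch exists because someone in the business explained a workflow, which is precisely the context code generation doesn't have:

```typescript
// Hypothetical submission routing. The rules are illustrative stand-ins
// for business decisions that come from client conversations.

type Submission = {
  formType: "contact" | "demo" | "support";
  region: string;
};

function routeSubmission(s: Submission): string {
  if (s.formType === "support") return "helpdesk-queue";
  if (s.formType === "demo" && s.region === "EU") return "eu-sales-pipeline";
  if (s.formType === "demo") return "sales-pipeline";
  return "general-inbox";
}
```
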
Accessibility
Structural accessibility is architectural, not cosmetic. It's not about adding alt text to images after the site is built. It's about focus management, keyboard navigation patterns, screen reader flow, ARIA attributes that accurately describe dynamic content, and ensuring that the experience works for people using assistive technology.
AI-generated code can pass automated linting tools while still being inaccessible in practice. An automated scan might confirm that a modal has the correct ARIA role. It won't catch that the focus doesn't return to the trigger element when the modal closes, or that the tab order through the modal content doesn't follow a logical sequence. These are the kinds of accessibility issues that require testing with actual assistive technology and understanding of how people navigate the web without a mouse.
Why the distinction matters for clients
For the people hiring a studio, the relevant question isn't whether the studio uses AI. Most do at this point. The relevant question is what they use it for.
AI as an accelerator means better value. Faster timelines on production work, lower cost on tasks that don't require senior judgment, and more budget and attention available for the strategic work that determines whether the project actually serves the business. This is the scenario where clients benefit from AI adoption.
AI as a replacement for judgment means risk. Decisions made without business context, architecture that doesn't anticipate how the company grows, accessibility gaps that surface after launch, and integrations that work in testing but fail under real conditions. This is the scenario where AI adoption becomes a liability for the client, even if the initial delivery is faster and cheaper.
The question worth asking before hiring a studio is direct: are you using AI to deliver more thoughtful work faster, or to deliver less thoughtful work cheaper? The answer reveals whether AI is serving the client's interests or the studio's margins.
What this looks like in practice
A typical project moves through phases, and AI shows up differently in each one.
Discovery and strategy involves conversations with the client, competitive analysis, audience research, and information architecture planning. AI doesn't participate in this phase in any meaningful way. The output depends on understanding the specific business, its market, and its goals. That understanding comes from human conversation and experience.
Design is primarily human work. AI might generate initial layout variations for comparison or produce placeholder content so design decisions can be evaluated with realistic text. The design decisions themselves (the system architecture, the brand application, the responsive behavior, the interaction design) are made by experienced designers.
Development is where AI contributes most. Code scaffolding, component boilerplate, debugging, and documentation all benefit from AI acceleration. Every piece of AI-generated code is reviewed, tested, and refined by a developer who understands the project's architecture and quality standards.
Content gets a first draft from AI and a final version from a human. The voice, the accuracy, and the strategic alignment with the brand and the audience all require human judgment. AI gets us to a working draft faster. It doesn't get us to a finished product.
Quality assurance and launch are human-led. Automated testing tools, some of which incorporate AI, supplement manual review. But the decisions about what's ready to ship, what needs more work, and what compromises are acceptable are ours.
Common questions
Do you use AI to build client websites?
We use AI tools in our development workflow to accelerate repetitive production tasks. Architecture decisions, design systems, and strategic work are done by experienced humans. AI accelerates the work. It doesn't replace the thinking behind it.
Does using AI make projects cheaper?
It can reduce time on production tasks, which creates efficiency. But the work that drives project cost is strategy, architecture, and design. Those require experienced judgment regardless of which tools are involved.
Which AI tools do you use?
The specific tools change as the space evolves. What stays consistent is how we use them: for production acceleration on code generation, debugging, and content drafts. Not for architectural or strategic decisions.
How do you ensure quality when AI is involved?
Everything AI generates gets reviewed by an experienced developer or designer before it enters a project. AI output is a starting point. The review, refinement, and quality decisions are human.
Should I be concerned if my developer uses AI?
Not if they're using it to accelerate production while maintaining experienced judgment on decisions. Be concerned if AI is being used to replace that judgment, especially on architecture, accessibility, and integration work.
Author:
Jeremy Bokor
Founder, Nifty Inc
