AI in Engineering: Real-world Lessons from Sahi and Facets

Dale Vaz and Anshul Sao

The promise of AI in engineering is everywhere - from coding assistants to automated testing. But how are real engineering teams actually making it work? In a recent Elevation Capital AI Build Series session, engineering leaders Dale Vaz (Co-founder, Sahi) and Anshul Sao (Co-founder, Facets) shared their ground-level experiences implementing AI tools and transforming their development workflows.
We’re still in the early stages of a fundamental shift in how we build software. As Dale Vaz, co-founder of Sahi, put it: "This is like an Iron Man moment, where you get that Jarvis suit, and everybody can become an Iron Man." But wielding this power effectively requires careful thought and structured implementation.
The New Engineering Organization: Rethinking Team Structure
AI Marshals: A New Role Emerges
One of the most interesting takeaways came from Anshul Sao at Facets. They created a new role called the "AI Marshal" - engineers who shadow different teams to identify and execute on AI automation opportunities.
AI marshals observe colleagues across departments (such as support or marketing) to spot repetitive or inefficient tasks that could be improved with AI agents, and they suggest - and sometimes help build - those agents. The insight behind the role is that people rarely complain about inefficient processes they are used to, so a neutral party is better placed to uncover opportunities for automation. At Facets, this is a part-time, rotating position usually filled by members of the tech team.
The Senior Engineer Evolution
Dale's approach at Sahi challenged conventional team structures. Instead of the traditional engineering pyramid, they built around a core of senior engineers (SDE4 profiles).
Dale explained that this allowed them to start with a strong foundation of expertise, which was essential for adopting an AI-first approach and scaling the company efficiently with a small team. This strategy also helped drive innovation and productivity, as these senior engineers could leverage AI tools effectively and mentor others as the team grew.
How Senior Engineers Drive AI Adoption
- Deeply understand system architecture
- Know how to effectively prompt AI tools
- Can evaluate AI-generated code critically
- Mentor junior engineers through pair programming
Tools & Implementation: A Practical Guide
The AI Tool Stack
In the modern engineering workflow, different tools are used to address specific needs and optimize various stages of the software development lifecycle. Here are some examples from the discussion:
- Coding and Code Generation: Tools like Copilot, Cursor, and Windsurf help engineers write code faster and more efficiently. They can act as pair programmers, handle boilerplate code, and suggest solutions.
- Code Review: Tools such as CodeRabbit are used for automated code review, catching common issues and providing a first layer of quality assurance before human review.
- Debugging and Infrastructure: Amazon Q and similar tools help with debugging infrastructure issues, providing targeted insights (e.g., identifying AWS capacity constraints) that would otherwise require manual investigation.
- CI/CD and Automation: AI tools are used to automate tasks like creating CI/CD pipelines, generating YAML scripts, and managing infrastructure as code (e.g., with Terraform templates).
- UI Prototyping and Internal Tools: Retool and custom GPTs are used to quickly build UI prototypes and internal dashboards, enabling non-engineering teams (like customer support) to leverage AI for their workflows.
The key takeaway is that modern engineering teams use a diverse set of tools, each serving a specific purpose, to maximize productivity, quality, and speed. The choice of tool depends on the task at hand, and teams benefit from experimenting and adapting as new solutions emerge.
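The CI/CD point above is easy to make concrete. As a hedged illustration (not an example from the session), this is the kind of minimal GitHub Actions workflow such tools typically generate from a prompt like "run my Python tests on every push":

```yaml
# Hypothetical AI-generated pipeline: check out the repo, set up Python,
# install dependencies, and run the test suite on every push.
name: ci
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: pytest
```

Even a boilerplate file like this is exactly the repetitive, well-specified artifact that AI tools produce reliably, freeing engineers to review rather than write it.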
Teams are also implementing multi-layer review processes by combining automated AI-based reviews with traditional human code reviews. After the code passes the AI review, it is then reviewed by a human engineer. Human reviewers focus on more complex, logical, or architectural issues that AI might miss, and ensure that the code aligns with broader project goals and context.
In some cases, teams use multiple AI models to review the same code, leveraging the strengths of different models and having them “compete” to find issues. This adds another layer of defense before human review.
The process is designed so that AI and human reviews complement each other. AI is particularly good at catching repetitive or well-defined issues, while humans excel at nuanced, context-dependent problems. Adding AI reviews is about increasing coverage and quality, not replacing human oversight. The goal is to add more checks, not to remove existing ones.
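The multi-model review step described above can be sketched in a few lines. This is a minimal illustration, not any team's actual pipeline: each reviewer callable stands in for a call to a real review tool or LLM API, and the merge step deduplicates findings before anything reaches a human reviewer.

```python
# Hedged sketch of multi-model AI code review: run several reviewers over
# the same diff and merge their deduplicated findings for human triage.
from typing import Callable

# An issue is (file, line, description). Reviewer callables are hypothetical
# stand-ins for real review tools or LLM API calls.
Issue = tuple[str, int, str]


def combined_ai_review(
    diff: str,
    reviewers: list[Callable[[str], list[Issue]]],
) -> list[Issue]:
    """Run each AI reviewer over the diff and merge unique findings."""
    seen: set[Issue] = set()
    merged: list[Issue] = []
    for review in reviewers:
        for issue in review(diff):
            if issue not in seen:  # different models often flag the same issue
                seen.add(issue)
                merged.append(issue)
    return merged
```

In practice the merged list would gate the pull request: an empty list routes the change straight to human review, while any findings go back to the author first.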
Culture & Adoption: Making It Work
Starting Small but Thinking Big
Rather than trying to roll out AI across the entire engineering team at once, it’s more effective to begin with small, focused projects that have a limited impact if something goes wrong (a "limited blast radius"). Examples include automating a specific internal library or addressing a common developer pain point, such as CI/CD pipeline creation.
By starting with these manageable projects, teams can quickly demonstrate value, build confidence, and learn from early experiences. Once these small wins are achieved, it becomes easier to scale AI adoption to larger, more critical parts of the organization. This approach helps manage risk, encourages buy-in, and lays the groundwork for broader transformation.
Key adoption strategies:
- Target common pain points
- Show immediate value
- Let engineers discover benefits organically
The Productivity Question
While exact metrics vary, teams reported significant improvements:
- 2-5x faster development cycles
- Operating with 20-30% of traditional team sizes
- Reduced debugging time
Dale reported that, compared to traditional teams, Sahi achieved at least a 5x reduction in required team size and much faster delivery timelines, attributing this to deep AI adoption.
While some teams attempt to track metrics like token consumption or lines of code, most agree that the productivity boost is often felt in the speed of shipping, reduced overhead, and improved team morale.
Productivity gains are not just in coding, but also in debugging, automation, and reducing communication overhead due to smaller, more efficient teams.
Security & Compliance: The Ongoing Challenge
There are still several open questions and unsolved problems, particularly around data protection when using third-party AI services and ensuring comprehensive compliance in rapidly changing environments.
Outstanding Concerns
- Exposure of Secrets: AI-generated code can sometimes include hardcoded secrets or credentials, increasing the risk of accidental exposure.
- Bypassing Design Patterns: Engineers may use AI to generate large blocks of code without adhering to established design patterns or best practices, making it harder to audit and secure the codebase.
- Data Privacy: There is uncertainty about what data is sent to large language models (LLMs), especially when using external APIs, and whether this data could be used for training or inadvertently leaked.
- Lack of Guardrails: Ensuring that AI-generated code complies with organizational security, privacy, and compliance requirements remains a challenge.
Current Approaches and Mitigations
- Automated Scanning: Tools like GitHub Actions and other AI-based systems are used to scan code for secrets, credentials, and other security issues before code is merged or deployed.
- Human Code Review: Despite the use of AI, human review remains a critical layer to catch complex or context-specific security issues that automated tools might miss.
- Cautious Data Sharing: Teams are advised to avoid sending sensitive data to external LLMs and to use settings that prevent data from being used for model training when possible.
- Maintaining Best Practices: There is an emphasis on teaching AI to work within established design patterns and security frameworks, rather than allowing it to generate code that bypasses these controls.
- Ongoing Vigilance: Both Dale and Anshul acknowledge that these are evolving challenges, and that teams must remain vigilant and adapt as new risks and tools emerge.
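The automated scanning mitigation above can be illustrated with a toy pre-merge check. This is a deliberately minimal sketch - real teams rely on dedicated scanners rather than hand-rolled regexes - but it shows the shape of the control: pattern-match each line of a diff for likely credentials and fail the check on any hit.

```python
# Minimal illustration of pre-merge secret scanning. The patterns below are
# simplified examples; production teams use purpose-built scanning tools.
import re

SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}


def scan_for_secrets(text: str) -> list[tuple[str, int]]:
    """Return (pattern_name, line_number) for every suspected secret."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((name, lineno))
    return findings
```

Wired into CI, a non-empty result blocks the merge - exactly the kind of guardrail that catches AI-generated code with hardcoded credentials before it ships.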
Conclusion: Pragmatic Optimism
The key takeaway? AI in engineering isn't about wholesale replacement of existing practices - it's about thoughtful augmentation and acceleration. Success comes from:
- Starting small with clear wins
- Maintaining strong engineering principles
- Building teams that can effectively leverage AI
- Keeping security and quality at the forefront
Remember: Start small, focus on real friction points, and build from there. The goal isn't to use AI everywhere, but to use it where it meaningfully improves your engineering workflow while maintaining code quality and system reliability.
The truth likely lies in the middle. AI is neither the silver bullet that will solve all engineering challenges nor the threat that will make engineers obsolete. It's a powerful tool that, when combined with engineering expertise and organizational discipline, can deliver remarkable results.