My go-to AI assistants for coding, testing, and docs with concrete workflows.

AI has shifted from novelty to daily tooling. Over the last year, I standardized where and how I use AI so it’s reproducible across projects and teammates. Below are five tools and concrete workflows that consistently improve my throughput and quality. Each section includes prompts, code, or configs you can adapt immediately.
Inline suggestions help with boilerplate, API ergonomics, and tests. The trick is to scope prompts and accept small, reviewable diffs.
```ts
// src/lib/price.ts
export function priceWithTax(base: number, rate = 0.085) {
  if (base < 0) throw new Error('NEGATIVE');
  return Math.round(base * (1 + rate) * 100) / 100;
}
```

Ask your assistant to scaffold edge‑case tests; then refine:
```ts
// src/lib/price.test.ts
import { priceWithTax } from './price';

describe('priceWithTax', () => {
  it('applies default rate', () => {
    expect(priceWithTax(100)).toBe(108.5);
  });
  it('supports custom rate', () => {
    expect(priceWithTax(100, 0.2)).toBe(120);
  });
  it('throws on negatives', () => {
    expect(() => priceWithTax(-1)).toThrow('NEGATIVE');
  });
});
```

Practical prompt:
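A scoped prompt in this spirit works well (illustrative; adjust file and function names to your codebase):

```text
You are pair-programming in this repo. Write Jest tests for priceWithTax in
src/lib/price.ts. Cover: default rate, custom rate, zero, negative input,
and rounding to two decimals. Keep each test under five lines and propose
the diff as a single small commit.
```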
Long‑context models are great for architecture discussions, naming, and risk analysis. Share the smallest set of files that captures the design seams.

```text
You are a senior backend engineer. Review the following module boundaries for testability:
- routes/, handlers/, services/, repositories/
Identify seams for DI, suggest smaller functions with explicit inputs/outputs,
and outline a migration plan with checkpoints.
```

AI can create consistent summaries and risk notes so reviewers focus on what matters.
```ts
// scripts/summarize-pr.ts
import OpenAI from 'openai';

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY! });

export async function summarizePR(title: string, diff: string) {
  const prompt =
    `Summarize this PR for reviewers. Title: ${title}\nDiff:\n${diff}\n` +
    `Output headings: Summary, Risks, Tests to add, Manual checks.`;
  const res = await client.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: prompt }],
  });
  return res.choices[0].message.content;
}

// Allow direct invocation from CI: tsx scripts/summarize-pr.ts "<title>" "<diff>"
const [title, diff] = process.argv.slice(2);
if (title) summarizePR(title, diff ?? '').then(console.log);
```

GitHub Action step:
```yaml
# .github/workflows/pr-helper.yml
name: pr-helper
on:
  pull_request:
    types: [opened, synchronize]
jobs:
  summarize:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # full history so the diff against main resolves
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      - run: npm ci
      - name: Summarize PR
        run: npx tsx scripts/summarize-pr.ts "$PR_TITLE" "$(git diff origin/main...HEAD)"
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          PR_TITLE: ${{ github.event.pull_request.title }}
```

Point your model at ADRs, runbooks, and API contracts to reduce hallucinations and speed onboarding.
```ts
import { ChatOpenAI, OpenAIEmbeddings } from '@langchain/openai';
import { MemoryVectorStore } from 'langchain/vectorstores/memory';

const docs = [
  'ADR-042 Use feature flags for risky changes',
  'Runbook: payments service, env vars, alerts',
];
const store = await MemoryVectorStore.fromTexts(docs, [{ id: 'adr' }, { id: 'runbook' }], new OpenAIEmbeddings());
const retriever = store.asRetriever(3);
const llm = new ChatOpenAI({ modelName: 'gpt-4o-mini' });

// Ground the answer in retrieved context rather than the model's priors.
const context = await retriever.invoke('How do we roll out risky changes?');
const answer = await llm.invoke(`Answer using only this context:\n${context.map(d => d.pageContent).join('\n')}`);
```

AI can enumerate edge cases you then lock in with property‑based tests.
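The test below imports `normalizeEmail` from a local `./normalize` module. A minimal sketch of what such a function might look like (hypothetical, assuming simple trim-and-lowercase semantics):

```typescript
// src/lib/normalize.ts — hypothetical module assumed by the test below
export function normalizeEmail(email: string): string {
  // Trim surrounding whitespace and lowercase. Applying it twice changes
  // nothing, which is exactly the idempotence property the test locks in.
  return email.trim().toLowerCase();
}
```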
```ts
import fc from 'fast-check';
import { normalizeEmail } from './normalize';

test('normalizeEmail is idempotent', () => {
  fc.assert(fc.property(fc.emailAddress(), e => {
    const once = normalizeEmail(e);
    const twice = normalizeEmail(once);
    return once === twice;
  }));
});
```

Prompt ideas:
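Some starters to adapt to your own domain and function under test (illustrative):

- "List edge cases for normalizing email addresses: unicode, plus addressing, surrounding whitespace, repeated application."
- "For each edge case, propose an invariant I can encode as a fast-check property."
- "Which generated inputs would most likely falsify idempotence or round-tripping?"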
AI won’t replace engineering fundamentals. It amplifies them. Start by formalizing two or three workflows from above and measure for a month. If throughput rises and quality holds or improves, keep the practice and document it for the team. These habits compound into faster, clearer, and more reliable delivery.