Hi guys,
I’ve been running Qwen locally for the last two weeks using an opencode + Ollama + qwen3.6:35b-A3B setup with q4_k_m quantization on a MacBook Pro M2 Max (32GB RAM).
My development workflow is heavily GitHub-centric: Issues and PRs are the source of truth and, in many cases, also serve as the workflow's state store.
To improve reliability and maintain consistently high code quality, I’ve been building a set of “skills” that act as boundary setters, orchestration layers, and quality gates.
Current skill set (still evolving):
Issue Lifecycle Skills
| Skill | Target Model | Description |
|---|---|---|
| improve-issue | opus | Enriches a raw GitHub issue into a precise, implementation-ready specification persisted directly in the issue body. This is always the first step before coding. The generated spec includes explicit Acceptance Criteria (AC). |
| evaluate-issue | sonnet or qwen | Sizes an enriched issue and recommends KEEP, COMPLETED, or SPLIT, including the recommended model tier for execution. |
| orchestrate-issue | sonnet or qwen; opus for complex tasks | Runs the full implementation → review → correction loop for an enriched issue. When the issue is considered done, it automatically invokes pr-from-issue. |
| pr-from-issue | qwen or haiku | Opens a PR from a completed issue, validates the "ready to be closed" marker, and executes the full test gate beforehand. |
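To give an idea of what the pr-from-issue gate amounts to, here's a rough Elixir sketch. The skill itself is a prompt, not code, so the module, helper names, and exact marker text below are my illustration; it assumes the GitHub CLI (`gh`) is installed:

```elixir
# Hedged sketch of the pr-from-issue gate; names and marker text are illustrative.
defmodule Skills.PrFromIssue do
  def run(issue_number) do
    with :ok <- ready_marker_present(issue_number),
         {_out, 0} <- System.cmd("mix", ["precommit"]),  # full test gate first
         {_out, 0} <- System.cmd("mix", ["test"]) do
      System.cmd("gh", ["pr", "create", "--fill"])       # only then open the PR
    else
      _ -> {:error, :gate_failed}
    end
  end

  # Checks the issue body for the "ready to be closed" marker via the gh CLI.
  defp ready_marker_present(issue_number) do
    {body, 0} =
      System.cmd("gh", ["issue", "view", to_string(issue_number), "--json", "body", "--jq", ".body"])

    if body =~ "ready to be closed", do: :ok, else: {:error, :marker_missing}
  end
end
```

The marker check is what keeps a half-finished issue from being promoted to a PR.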
The most interesting skill is probably orchestrate-issue, since it behaves as a controlled execution loop coordinating multiple sub-skills:
Implementation & Review Skills
| Skill | Target Model | Description |
|---|---|---|
| code-issue | sonnet or qwen; haiku for trivial tasks; opus for complex ones | Implements every Acceptance Criterion from the enriched specification (or unresolved review gaps) in an Elixir/Phoenix/Ash codebase, including tests. |
| review-issue | sonnet or qwen; opus for complex reviews | Performs a senior-reviewer pass over the implementation. Runs mix precommit and mix test, evaluates implementation coverage against the specification, and writes identified gaps back into the GitHub issue using a structured review marker block. |
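Concretely, the orchestrate-issue loop looks roughly like this. Again, the real skill is a prompt; every module and function name below is a hypothetical stand-in for a sub-skill invocation:

```elixir
# Illustrative Elixir sketch of the orchestrate-issue control loop.
defmodule Skills.OrchestrateIssue do
  @max_iterations 5

  def run(issue, iteration \\ 1) do
    :ok = code_issue(issue)      # implement open ACs / unresolved review gaps
    case review_issue(issue) do  # senior-reviewer pass: precommit, tests, spec coverage
      :done ->
        pr_from_issue(issue)     # issue is done: hand off to the PR skill

      {:gaps, _gaps} when iteration < @max_iterations ->
        run(issue, iteration + 1)  # review wrote gaps back to the issue; loop again

      {:gaps, gaps} ->
        {:error, {:max_iterations, gaps}}
    end
  end

  # Stand-ins so the sketch compiles; the real steps are sub-skill invocations.
  defp code_issue(_issue), do: :ok
  defp review_issue(_issue), do: Enum.random([:done, {:gaps, ["example gap"]}])
  defp pr_from_issue(issue), do: {:ok, issue}
end
```

The iteration cap is my addition; some bound like it is what makes the loop "controlled" rather than letting review/fix cycles run unbounded.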
After the PR is generated, I manually review it and add comments for anything that should be discussed, improved, or refactored. If further work is needed before merging, I invoke an additional skill:
PR Review Resolution
| Skill | Target Model | Description |
|---|---|---|
| address-pr-review | sonnet or qwen | Resolves PR review comments, updates tests when necessary, and iterates until the review state is acceptable for merge. |
The whole system is still evolving, but at this point it’s already producing surprisingly high-quality code with a fairly reliable workflow.
Performance is actually very good locally, although I do need to keep background activity to a minimum — typically just one Chrome tab open, while Docker and PostgreSQL are always running.
The next step is to separate test generation from the implementation code; I'll probably use an agent swarm…
I’m curious whether others here are experimenting with similar AI-assisted development workflows around Elixir/Phoenix/Ash projects, especially with local-first setups.