ASUS Internship Impact: Scaling AI Agents with MyCoder
During my Software Engineering Internship at ASUS (Open Cloud Infrastructure Software Center), I noticed a recurring bottleneck in our daily operations: Issue Triage.
Every week, our engineers (myself included) spent hours manually categorizing tickets. It was repetitive, subjective, and distracting. I realized this was the perfect opportunity to leverage our internal AI tool, MyCoder, not just for writing code but for automating workflows.
In this post, I’ll share how I designed an Automated Issue Classifier using Structured Prompting and GitLab MCP, a project I later shared in a center-wide technical session.
The Challenge: Operational Toil
Every engineering team faces the same bottleneck: Triaging Issues.
At scale, manually categorizing hundreds of tickets is not just time-consuming; it’s inconsistent. One developer might mark a bug as P1 - Critical, while another sees it as P2 - Major.
We identified three main pain points:
- Time Sink: 3-5 minutes per issue adds up to hours of lost productivity.
- Inconsistency: Subjective judgment varies between team members.
- Scalability: As the product grows, the backlog grows linearly, but the team size doesn’t.
The Solution: A Governed AI Agent
We didn’t just want a chatbot; we needed an Agent.
Using MyCoder (our internal AI assistant, built as a VS Code extension), we implemented a solution that automates the entire lifecycle of issue triage.

1. The “Golden Rule” of Reliability
A common mistake in GenAI adoption is treating the LLM like a chat partner.
Wrong approach: “Hey AI, do you think this issue is urgent?”
To achieve enterprise-grade reliability, we introduced a “Golden Rule”:
Human Definition (SOP) + Structured Prompt (YAML) = Reliable Agent
Instead of vague instructions, we use Definition Files and Decision Trees embedded in YAML prompts. This forces the AI to follow strict business logic:
- Severity Rules: Defined explicitly (e.g., “If core function fails = Critical”).
- Frequency Analysis: AI extracts occurrence rates (Always, Sometimes, Rarely).
- Priority Matrix: Severity × Frequency = Priority (see the sketch below).
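To make this concrete, here is a minimal sketch of what such a definition file might look like. The schema below is an illustrative assumption, not MyCoder's actual internal format; the specific thresholds and labels are placeholders:

```yaml
# Illustrative sketch only -- the real internal definition file differs.
severity_rules:
  Critical: "A core function fails or data is lost"
  Major: "A feature is degraded but a workaround exists"
  Minor: "A cosmetic or low-impact defect"

frequency_levels: [Always, Sometimes, Rarely]

priority_matrix:   # Severity x Frequency = Priority
  Critical: { Always: P1, Sometimes: P1, Rarely: P2 }
  Major:    { Always: P2, Sometimes: P2, Rarely: P3 }
  Minor:    { Always: P3, Sometimes: P3, Rarely: P3 }
```

Because the matrix is data rather than prose, the model has no room to improvise: it extracts Severity and Frequency from the ticket, and the priority follows deterministically from the lookup.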

2. Bridging the Gap with MCP (Model Context Protocol)
The real magic happens when the AI can “act.”
We leveraged GitLab MCP to connect MyCoder directly to our DevOps infrastructure. This transforms the AI from a passive reader into an active participant.
The workflow looks like this:
- Read: MyCoder calls `gitlab_list_issues` to fetch the latest backlog.
- Reason: The AI applies the structured prompt logic to classify each issue.
- Act: MyCoder calls `gitlab_update_issue` to apply the correct `Priority` label automatically.
No copy-pasting. No context switching. Just pure automation.
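To illustrate how these steps chain together, here is a hypothetical orchestration sketch. This is not MyCoder's real configuration format; only the two MCP tool names come from our actual setup, and the file names, labels, and fields are assumptions:

```yaml
# Hypothetical orchestration sketch -- not MyCoder's real config format.
# Only gitlab_list_issues and gitlab_update_issue come from the actual
# setup; everything else below is illustrative.
triage_workflow:
  - step: read
    tool: gitlab_list_issues
    args:
      state: opened
      labels: [needs-triage]             # assumed triage-queue label
  - step: reason
    prompt_file: issue_classifier.yaml   # the definition file sketched above
    outputs: [severity, frequency, priority]
  - step: act
    tool: gitlab_update_issue
    args:
      labels: ["priority::{{ priority }}"]   # assumed scoped-label convention
```

Because each step is an explicit tool call, every action the agent takes can be logged and reviewed, which feeds directly into the auditability point below.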

Key Takeaways: Governance is Key
The success of this implementation wasn’t just about the code; it was about Governance. By standardizing how we prompt and verify AI outputs, we achieved:
- 100% Consistency: The same input always yields the same priority.
- Auditability: Every decision is backed by a clear logic chain referenced in the prompt.
- Model Agility: Because we use MyCoder’s portal, we can switch between the latest models (e.g., Claude 3.5, GPT-4o) without changing our infrastructure or leaking API keys.
Conclusion
This “Issue Classifier” is just the beginning. By mastering Structured Prompting and MCP, we are laying the foundation for a new era of engineering where AI agents handle the toil, allowing us to focus on innovation.
If you are interested in a detailed breakdown of the prompt structure or the definition matrix, you can find the full presentation slides on my LinkedIn.
Note: MyCoder is our internal AI coding assistant. The concepts shared here regarding MCP and Prompt Engineering are applicable to any enterprise-grade AI adoption strategy.
Unless otherwise noted, all posts are licensed under CC BY-SA 4.0. Please credit the source when sharing.