Creating an Artificial Intelligence Jira Documentation Assistant for Enterprise Use on Local Hardware

Brief/Challenge
Stakeholder requests arrive as free-form help desk tickets and must be manually rewritten into structured Jira tickets with acceptance criteria, implementation notes, and test scenarios. This documentation work is repetitive and time-consuming.
I built a fine-tuned local LLM to auto-generate Jira issues with acceptance criteria from submitted stakeholder tickets, reducing requirements documentation time by ~60%.
Solution
Technical Architecture
- Model: IBM Granite
- Why Granite? Enterprise-focused, Apache 2.0 licensed (commercially friendly), strong instruction-following capabilities for structured output
- Chose local deployment over cloud APIs for privacy, zero recurring costs, and data security
Training Pipeline:
- Hardware: Windows 11 desktop (RTX 3090 GPU, 32 GB RAM)
- Framework: Unsloth + LoRA (Low-Rank Adaptation) fine-tuning (see the sketch after this list)
- Dataset: 15 real (but anonymized) stakeholder tickets and corresponding Jira tickets from my product work
- Training time: 30-90 minutes per training run
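For the curious, here is a minimal sketch of what an Unsloth + LoRA run like this looks like. The Granite checkpoint name, hyperparameters, and file names are illustrative assumptions rather than the exact values from my run, and trl's trainer API varies slightly across versions.

# Fine-tuning sketch; checkpoint name and hyperparameters are assumptions.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load a 4-bit Granite base model (assumed checkpoint name for illustration).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="ibm-granite/granite-3.1-8b-instruct",
    max_seq_length=2048,
    load_in_4bit=True,  # fits comfortably on a 24 GB RTX 3090
)

# Attach LoRA adapters: only these small low-rank matrices are trained,
# which is why 15 ticket pairs and 30-90 minutes per run are workable.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Each JSONL record has "instruction", "input", and "output" fields
# (see the example later in this post).
dataset = load_dataset("json", data_files="tickets.jsonl", split="train")

def to_text(example):
    # Flatten one ticket pair into a single training string.
    return {
        "text": (
            f"### Instruction:\n{example['instruction']}\n\n"
            f"### Input:\n{example['input']}\n\n"
            f"### Response:\n{example['output']}"
        )
    }

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=3,
        learning_rate=2e-4,
        logging_steps=1,
        output_dir="outputs",
    ),
)
trainer.train()

# Export a Q4-quantized GGUF file, which is what Ollama consumes on the Mac.
model.save_pretrained_gguf("jira-assistant", tokenizer, quantization_method="q4_k_m")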
Deployment:
- Platform: M1 MacBook Pro with 16 GB of RAM (local inference)
- Deployment tool: Ollama (open-source LLM runtime)
- Model format: GGUF Q4 quantization (optimized for speed + memory efficiency)
- Interface: Python CLI (see the sketch after this list)
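As a rough illustration of the CLI side, here is a sketch that posts a ticket to Ollama's default local REST endpoint. The model name "jira-assistant" and the prompt framing are stand-ins, not the exact ones I shipped.

# CLI sketch; model name and prompt framing are illustrative assumptions.
import argparse

import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def generate_jira_ticket(help_desk_text: str, model: str = "jira-assistant") -> str:
    # Reuse the same instruction the model was fine-tuned on.
    prompt = (
        "Convert the following help desk ticket into a structured Jira ticket "
        "with What, Why, ACs, Implementation Notes, and Testing Notes sections.\n\n"
        + help_desk_text
    )
    response = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,
    )
    response.raise_for_status()
    return response.json()["response"]

def main() -> None:
    parser = argparse.ArgumentParser(description="Help desk ticket -> Jira ticket")
    parser.add_argument("ticket_file", help="text file containing the raw ticket")
    args = parser.parse_args()
    with open(args.ticket_file, encoding="utf-8") as f:
        print(generate_jira_ticket(f.read()))

if __name__ == "__main__":
    main()

Registering the quantized model with Ollama first (e.g. ollama create jira-assistant -f Modelfile) keeps the entire loop on the laptop, with no external API calls.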
Outcome
I built a fine-tuned local LLM that auto-generates Jira issues with acceptance criteria from submitted stakeholder tickets, reducing requirements documentation time by ~60%. The whole project took about five days, on hardware I already owned.
This project is a proof of concept as much as an execution. Fifteen pairs of tickets (see the JSON example below) is admittedly a very small training set. However, instead of spending the time to gather hundreds of pairs, I wanted to get started on the actual outcome to test feasibility. Refinement and scale are easy compared to actually accomplishing the mission.
{
"instruction": "Convert the following help desk ticket into a structured Jira ticket with What, Why, ACs, Implementation Notes, and Testing Notes sections.",
"input": "# [Help Desk Ticket] blah blah blah blah",
"output": "# [Jira Ticket] blah blah blah blah with Jira formatting"
}

Why did I choose local deployment?
This project demonstrates the full AI product lifecycle, from dataset creation and model fine-tuning to cross-platform deployment and real-world validation.
Most AI projects today are simply wrappers around ChatGPT, Claude, or Gemini, but that approach has problems from an enterprise perspective. Beyond the ones most people would expect (data privacy, recurring API costs, response-time variability), there is one more reason: control.
The issue I have with these large, world-changing models is that they are large and constantly changing. If I have a specific use case (convert a help desk ticket into a Jira issue in my format) based on training data that I provided, the model doesn't need to know the Metacritic review score for Wonder Man. That information isn't relevant to my use case. Along the same lines, as the large models change, you never know when the quality of responses might also change. Maybe ChatGPT works great today, but after four more version updates the responses start to drift and no longer meet the specs of the original task. Of course nothing is 100% repeatable when dealing with AI, but I prefer having the option to adjust the parameters myself.
Local deployment provides control and laser focus on the specified task at hand while also providing customization, version control, and the option to audit the entire pipeline. It's a more valuable skill set as well. That's also the reason I chose Granite as the model to base my training on: it is an open large language model built for the enterprise with a focus on performance, and IBM provides other tools and resources that are just as useful. For roughly a week's worth of work, I have a valuable tool that solves an actual business need, and building it demystified some aspects of AI for me.
That's called a win in my book.
Paris D Hunter
