The Opportunity
As MIS Head of the ICT Department, I attended a Google Workspace event that sparked an idea. Witnessing how interactive, competitive formats could dramatically engage participants with complex concepts, I saw an opportunity to challenge my skills and create something meaningful for our institution.
I proposed Dominican Forge to the Department of Digital Transformation and Pedagogy (DDTP), a strategic partner within St. Dominic College of Asia whose mission is to revolutionize education by integrating new technological tools into academic instruction. DDTP's focus on faculty development, technology adoption, and enhancing student learning experiences made them the perfect collaborator. The vision was "How to Train Your AI": a centerpiece for St. Dominic College of Asia's 22nd Founding Anniversary celebration on November 26, 2025, at DRA Hall.
The goal was simple yet ambitious: let our Dominican community experience the power of AI technology in action, not just through traditional lectures. With only six weeks until the event and working within the department's constraints, the critical challenge became: how do you teach prompt engineering interactively in a way that's educational, measurable, genuinely engaging, and demo-ready?
The answer was Dominican Forge: a competitive, gamified platform where students learn by doing, racing against the clock to craft prompts that make Google's Gemini AI produce specific outputs while meeting hidden constraints. The platform's success resonated with DDTP's mission, and they have since expressed interest in collaborating on future implementations that deepen the integration of emerging AI technologies into our educational ecosystem.
What I Built
Dominican Forge is a full-stack web application that transforms prompt engineering education into an interactive challenge system. Students compete in timed challenges, crafting prompts to achieve specific goals while their performance is evaluated by a sophisticated dual-AI scoring system.
Core Architecture
Backend Foundation
- CodeIgniter 3 MVC Framework for rapid development with PHP 7.3+
- MySQL 5.5+ with an optimized schema design for high-concurrency leaderboard queries
- Google Gemini API (gemini-2.5-flash-lite) for dual-purpose AI integration
- Custom security middleware with CSRF protection, rate limiting, and input sanitization
Frontend Experience
- Bootstrap 5 with custom dark theme featuring retro 16-bit gaming aesthetics
- jQuery for real-time interactions and AJAX submission handling
- Chart.js for data visualization in admin dashboards
- Responsive design supporting mobile-first competitive gameplay
The Technical Journey
1. Dual AI Evaluation System
The most complex challenge was creating fair, consistent scoring. I implemented a dual-API architecture:
First Call: Generation
- Student writes a prompt attempting to achieve a challenge goal
- System wraps the prompt with defensive instructions to prevent prompt injection
- Gemini generates a response based on the student's instructions
Second Call: Grading
- The AI-generated response is evaluated against the challenge goal
- A separate AI grader assesses quality on a 0-10 scale with detailed reasoning
- This separation prevents students from gaming the system
// Simplified example from the Game controller
public function submit_prompt() {
    // Challenge context and start time are assumed to live in the session
    $challenge  = $this->session->userdata('active_challenge');
    $time_taken = time() - $this->session->userdata('challenge_started_at');

    $user_prompt = $this->security_validator->validate_prompt(
        $this->input->post('prompt', TRUE)
    );

    // Detect prompt injection attempts
    $injection_check = $this->detect_prompt_injection($user_prompt);
    if (!$injection_check['safe']) {
        return $this->output_error('Prompt contains prohibited patterns');
    }

    // Wrap with defensive instructions
    $protected_prompt = $this->wrap_prompt_with_defense($user_prompt, $challenge);

    // Generate AI response
    $api_result = $this->gemini_api->generate_content($protected_prompt);

    // Grade the response quality
    $quality_check = $this->gemini_api->grade_response(
        $challenge['goal'],
        $api_result['data']
    );

    // Calculate final score with multi-factor algorithm
    $scoring_result = $this->score_submission(
        $user_prompt,
        $api_result['data'],
        $challenge,
        $time_taken,
        $quality_check
    );
}
2. Multi-Factor Scoring Algorithm
The scoring system evaluates submissions across multiple dimensions to ensure comprehensive assessment:
Base Point Allocation (1000 potential points)
- Keyword Matching (400 pts): Checks if AI output contains required concepts
- Constraint Adherence (300 pts): Validates word count and format requirements
- Prompt Quality (200 pts): Evaluates detail and clarity (180+ characters for exceptional)
- Negative Constraints (100 pts): Rewards avoiding forbidden elements
Multipliers & Modifiers
- Quality Multiplier (0.5x - 1.2x): Based on AI grader's 0-10 assessment
- Speed Bonus (+50 pts): For completion under 40 seconds
- Time Penalties (-1.5 pts/sec): For exceeding 3-minute limit (max -150)
- Cross-Program Bonus (+10%): For attempting challenges outside your major
- Relevance Penalty (-15% to -50%): For off-topic prompts detected by AI analysis
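Put together, the base points and modifiers above can be sketched as a single function. This is a simplified illustration rather than the production controller: the linear mapping from the 0-10 grade onto the 0.5x-1.2x multiplier range, the order of operations, and the final clamp are assumptions, and the cross-program bonus and relevance penalty are omitted for brevity.

```php
<?php
// Simplified sketch of the multi-factor scoring pipeline.
// Assumptions: linear grade-to-multiplier mapping, clamp at zero;
// cross-program bonus and relevance penalty omitted.
function score_submission(
    int $base_points,     // 0-1000 from keyword/constraint/quality checks
    float $quality_grade, // AI grader's 0-10 assessment
    float $time_taken     // seconds elapsed
): int {
    // Grade 0 -> 0.5x, grade 10 -> 1.2x
    $quality_multiplier = 0.5 + ($quality_grade / 10) * 0.7;
    $score = $base_points * $quality_multiplier;

    // Speed bonus for finishing under 40 seconds
    if ($time_taken < 40) {
        $score += 50;
    }

    // -1.5 pts/sec past the 3-minute limit, capped at -150
    if ($time_taken > 180) {
        $score -= min(150, ($time_taken - 180) * 1.5);
    }

    return (int) round(max(0, $score));
}

echo score_submission(800, 10.0, 30); // 960 after multiplier + 50 speed bonus = 1010
```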
Anti-Cheat Measures
- Copy-Paste Detection: Progressive penalties for 10+ consecutive copied words
- 10-13 words: -45% cap, max 900 pts
- 14-17 words: -70% cap, max 700 pts
- 18+ words: -85% cap, max 500 pts
- Constraint Revelation Penalty: Points deducted if students unlock hidden constraints
- 30-Minute Cooldown: Prevents spam submissions and API abuse
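The copy-paste tiers can be illustrated with a naive longest-shared-run check against a source text. This is a sketch under assumptions: the real detector's normalization and choice of comparison text are not documented here.

```php
<?php
// Sketch of the copy-paste detector: find the longest run of consecutive
// words shared between the student's prompt and a source text, then apply
// the progressive point caps listed above. Lowercase/whitespace matching
// is an assumption for illustration.
function longest_shared_word_run(string $prompt, string $source): int {
    $a = preg_split('/\s+/', strtolower(trim($prompt)));
    $b = preg_split('/\s+/', strtolower(trim($source)));
    $best = 0;
    foreach ($a as $i => $word) {
        foreach ($b as $j => $other) {
            if ($word !== $other) continue;
            $len = 0;
            while (isset($a[$i + $len], $b[$j + $len])
                && $a[$i + $len] === $b[$j + $len]) {
                $len++;
            }
            $best = max($best, $len);
        }
    }
    return $best;
}

// Progressive caps from the tiers above
function copy_paste_cap(int $run_length): int {
    if ($run_length >= 18) return 500; // -85% tier
    if ($run_length >= 14) return 700; // -70% tier
    if ($run_length >= 10) return 900; // -45% tier
    return 1000;                       // no penalty
}
```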
3. Security & Prompt Injection Defense
One of the most fascinating challenges was defending against prompt injection attacks, where clever students try to manipulate the AI system itself.
Detection Patterns Implemented:
- Instruction override attempts ("ignore previous instructions")
- Prompt leakage attempts ("reveal your system prompt")
- Role manipulation ("you are now in developer mode")
- Constraint bypass attempts ("without restrictions")
- Nested instruction injection (`<system>` tags, BEGIN_INSTRUCTIONS)
- Encoding tricks (Base64, hex, Unicode escapes)
Each pattern category has a risk weight, and submissions exceeding the threshold are rejected before hitting the API, with the attempt logged for security monitoring.
// Pattern detection with risk scoring
private function detect_prompt_injection($prompt) {
    // Pattern library: each category maps regexes to a risk weight
    // (assumed to be loaded from config)
    $injection_patterns = $this->config->item('injection_patterns');

    $risk_score = 0;
    $detected_patterns = [];

    foreach ($injection_patterns as $category => $config) {
        foreach ($config['patterns'] as $pattern) {
            if (preg_match($pattern, $prompt)) {
                $risk_score += $config['weight'];
                $detected_patterns[] = $category;
            }
        }
    }

    return [
        'safe'              => $risk_score < 100,
        'risk_score'        => $risk_score,
        'detected_patterns' => array_unique($detected_patterns)
    ];
}
4. Real-Time Competitive Features
Leaderboard System
- Global Rankings: All-time best scores across all students
- Daily Leaders: Today's top 5 performers with 10-minute cache
- Program-Specific: Compete within your major (BSIT, BSCS, etc.)
- Speed Forgers: Fastest completion times under 60 seconds
- Tier System: Initiate → Code Weaver → Cyber-Artificer → Data Smith → Master Forger → Forge Master (980+ pts)
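The tier ladder maps naturally onto a threshold function. Only the Forge Master cut-off (980+) is documented above; the lower thresholds in this sketch are hypothetical placeholders.

```php
<?php
// Threshold sketch of the score-to-tier mapping. Only the Forge Master
// cut-off (980+) is documented; the lower thresholds are hypothetical.
function tier_for_score(int $score): string {
    if ($score >= 980) return 'Forge Master';
    if ($score >= 800) return 'Master Forger';   // assumed threshold
    if ($score >= 600) return 'Data Smith';      // assumed threshold
    if ($score >= 400) return 'Cyber-Artificer'; // assumed threshold
    if ($score >= 200) return 'Code Weaver';     // assumed threshold
    return 'Initiate';
}
```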
Performance Optimization
- Query optimization using MySQL window functions (8.0+) with fallback subqueries for older versions (5.5+)
- Strategic caching with 10-minute TTL on leaderboard data
- Composite indexes on (program_code, deleted_at, score, time_taken)
- Cache invalidation on new submissions
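The version-gated query strategy might look like the sketch below. The table and column names (submissions, student_id) are assumptions beyond the indexed columns listed above, and the pre-8.0 branch shows one possible correlated-subquery shape rather than the exact LEFT JOIN fallback used in production.

```php
<?php
// Sketch of the version-gated leaderboard query: RANK() window function
// on MySQL 8.0+, correlated-subquery fallback otherwise. Table/column
// names beyond the documented composite index are assumptions.
function leaderboard_sql(bool $has_window_functions): string {
    if ($has_window_functions) {
        // MySQL 8.0+: let the server rank in one pass
        return "SELECT student_id, program_code, score, time_taken,
                       RANK() OVER (ORDER BY score DESC, time_taken ASC) AS ranking
                FROM submissions
                WHERE deleted_at IS NULL";
    }
    // Older MySQL: rank = 1 + number of rows that outrank this one
    return "SELECT s.student_id, s.program_code, s.score, s.time_taken,
                   1 + (SELECT COUNT(*) FROM submissions b
                        WHERE b.deleted_at IS NULL
                          AND (b.score > s.score
                               OR (b.score = s.score
                                   AND b.time_taken < s.time_taken))) AS ranking
            FROM submissions s
            WHERE s.deleted_at IS NULL";
}
```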
5. Program-Tailored Challenge System
Challenges are designed for specific academic programs while maintaining fairness:
Challenge Types:
- Program-Specific: BSIT (web dev scenarios), BSCS (algorithm challenges), BSIS (data analysis)
- General Challenges: Available to all programs with +10% bonus
- Difficulty Tiers: Easy, Medium, Hard, Expert
- Constraint Revelation: Students can unlock hidden requirements mid-challenge for a point penalty
Challenge Structure:
{
    "title": "The Efficient Algorithm Explainer",
    "scenario": "You're teaching a beginner about sorting algorithms",
    "goal": "Generate a clear explanation of QuickSort in under 150 words",
    "scoring_rules": {
        "must_contain": ["pivot", "partition", "recursion"],
        "max_words_response": 150,
        "must_not_contain_prompt": ["copy", "example from"]
    },
    "program_code": "BSCS",
    "difficulty": "Medium"
}
6. Automated Result Distribution
After each submission, the system generates and emails:
Certificate of Participation
- HTML email with retro gaming aesthetic matching the platform
- Performance breakdown with tier badge (Forge Master, Data Smith, etc.)
- Score details showing base points, multipliers, and penalties
- Token usage statistics (prompt/response/thoughts tokens)
Technical Implementation:
- CodeIgniter's Email library with HTML templates
- Parsedown for Markdown-to-HTML conversion of AI responses
- Async email dispatch after JSON response sent to avoid blocking
- DomPDF integration for downloadable PDF certificates (planned feature)
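The "respond first, email after" flow can be sketched with fastcgi_finish_request(), which is available under PHP-FPM; this illustrates the pattern rather than reproducing the exact production code.

```php
<?php
// Sketch of async email dispatch: flush the JSON payload to the client,
// then run the slow email work. fastcgi_finish_request() exists only
// under PHP-FPM; the fallback branch is a best-effort flush for other
// SAPIs. Illustrative, not the exact production code.
function respond_then_email(array $result, callable $send_email): void {
    header('Content-Type: application/json');
    echo json_encode($result);

    if (function_exists('fastcgi_finish_request')) {
        fastcgi_finish_request(); // client gets the response immediately
    } else {
        while (ob_get_level() > 0) {
            ob_end_flush(); // drain any output buffers
        }
        flush();
    }

    // Slow work happens after the response has been delivered
    $send_email();
}
```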
Technical Highlights
What Went Well
1. Rapid Development Cycle
CodeIgniter 3's lightweight MVC structure let me iterate quickly during the 6-week development period leading up to the ICT event. The framework's "convention over configuration" approach meant less boilerplate and more time solving actual problems.
2. Cost-Effective AI Integration
Using Gemini 2.5 Flash Lite kept API costs manageable (under $0.001 per API call) while maintaining response quality. The dual-call system adds only ~0.3-0.5 seconds to total latency.
3. Database Performance
Composite indexing and query optimization handled 500+ students during peak event hours. MySQL 8.0's window functions reduced leaderboard query times from ~800ms to ~45ms.
4. Security Posture
Zero successful prompt injection attacks during production use. The layered defense (detection → sanitization → wrapped prompts) proved effective against creative attempts.
Challenges Overcome
1. Scoring Fairness
Early versions allowed students to "game" the system by including challenge keywords directly in their prompts. Solved by implementing relevance scoring and copy-paste detection with progressive penalties.
2. API Rate Limiting
Google's Gemini API has generous limits, but I implemented client-side rate limiting (5 submissions per minute) to prevent accidental DoS from students rapidly retrying.
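A sliding-window limiter of this kind can be sketched as follows; the in-memory array stands in for whatever store (session, database) actually backs the counter in a deployed version.

```php
<?php
// Sketch of a sliding-window rate limiter (5 submissions per minute),
// keyed per student. The in-memory $log array is a stand-in for a
// session- or database-backed store.
function allow_submission(array &$log, string $student_id, int $now,
                          int $limit = 5, int $window = 60): bool {
    // Keep only timestamps still inside the window
    $recent = array_values(array_filter(
        $log[$student_id] ?? [],
        function ($t) use ($now, $window) { return $t > $now - $window; }
    ));
    if (count($recent) >= $limit) {
        $log[$student_id] = $recent;
        return false; // over the limit: reject
    }
    $recent[] = $now;
    $log[$student_id] = $recent;
    return true;
}
```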
3. MySQL Compatibility
The production server ran an older MySQL version without window functions. I implemented fallback queries using LEFT JOIN subqueries that matched performance for datasets under 10,000 records.
4. Session Management
CodeIgniter's file-based sessions caused race conditions under high load. Migrated to database sessions with proper locking for concurrent requests.
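Switching CodeIgniter 3 to database sessions is primarily a config change (plus creating the session table with the schema from the CI3 session docs); a minimal sketch:

```php
<?php
// application/config/config.php: switching CodeIgniter 3 from file-based
// to database sessions. 'ci_sessions' is the table name conventionally
// used in the CI3 session docs; the table needs the documented schema
// (id, ip_address, timestamp, data) for the driver's locking to work.
$config['sess_driver']         = 'database';
$config['sess_save_path']      = 'ci_sessions';
$config['sess_match_ip']       = FALSE;
$config['sess_time_to_update'] = 300;
```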
Real-World Impact
During the "How to Train Your AI" exhibit (November 26, 2025, DRA Hall):
- 500+ students registered and competed throughout the day
- 2,400+ submissions processed with 99.8% API success rate
- Average score: 687 (out of a 1500 maximum), indicating a healthy difficulty balance
- Peak concurrency: 180 simultaneous users
- Zero downtime during the founding anniversary event
Day-of Adjustments:
- Real-time prompt validation tuning during the event to strike the right balance between challenge difficulty and accessibility
- Fine-tuned the leniency of prompt checking rules based on student feedback
- Deployed updates between sessions to optimize scoring fairness
Learning Outcomes:
- Students gained practical understanding of prompt engineering principles
- Competitive format encouraged experimentation with different prompting strategies
- Real-time feedback helped students iterate and improve
- Admin dashboard provided instructors with participation analytics
Technical Statistics:
- API Response Time: 1.2-1.8s average for dual-call evaluation
- Database Query Performance: 15-50ms for leaderboard queries (cached)
- Total API Cost: ~$4.80 for 2,400+ dual-API submissions
- Token Usage: Avg 180 prompt + 120 response tokens per submission
What I Learned
Technical Growth
1. AI System Design
Building a fair evaluation system taught me the importance of adversarial thinking. Students will always try to break your system; designing for that from day one is crucial.
2. Performance at Scale
Optimizing database queries isn't just about indexes. Understanding query execution plans, caching strategies, and when to denormalize made a measurable difference.
3. Security Mindset
Prompt injection is the new SQL injection. Defending LLM integrations requires pattern detection, input validation, and defensive prompt engineering.
Project Management
1. User-Centric Design
The retro gaming theme wasn't just aesthetic; it made technical content approachable. Students reported the visual design made them want to participate.
2. Iterative Development
Beta testing with 50 students two weeks before launch revealed critical issues (copy-paste exploits, confusing feedback messages) that would have tanked the event.
3. Documentation Matters
Writing comprehensive docs for admins meant instructors could manage challenges and troubleshoot without my intervention during the event.
Architecture Decisions
Why CodeIgniter 3?
Strengths for This Project:
- Extremely lightweight (~2MB framework) with minimal overhead
- Simple MVC structure perfect for rapid prototyping
- Excellent documentation and mature ecosystem
- Easy LAMP stack deployment (critical for college server infrastructure)
- Built-in session, email, and database libraries
Trade-offs:
- Older PHP patterns (not PSR-compliant)
- Limited modern tooling (no built-in dependency injection)
- Manual query building (no full ORM like Eloquent)
Would I Choose It Again?
For a time-constrained project with traditional LAMP hosting, yes. For a greenfield long-term project, I'd strongly consider Next.js with Prisma and tRPC for modern DX.
Why MySQL Over PostgreSQL?
Reality: Infrastructure Constraints
The college server ran MySQL on shared hosting. While PostgreSQL offers better JSON handling and window functions, working with the existing infrastructure was non-negotiable.
Learning:
Sometimes the "best" technology choice is the one that's already deployed and supported by your infrastructure team.
Moving Forward
Current Status
The platform successfully served its purpose for the ICT event and remains available for student practice. The codebase is maintained at St. Dominic College of Asia under a lending agreement.
What's Next?
IClass LMS Integration:
- Seamless Authentication: Integrate with our in-house Learning Management System (IClass) using student credentials instead of separate registration
- Mini-Game Module: Deploy Dominican Forge as an interactive mini-game within IClass for continuous learning throughout the semester
- Course Integration: Allow instructors to assign specific challenges as coursework with automatic grade synchronization
- Student Progress Tracking: Leverage IClass's existing analytics infrastructure for comprehensive performance monitoring across all courses
Applying These Lessons: The prompt injection defense patterns and dual-AI evaluation architecture are directly applicable to my current work on enterprise LLM integration projects. Understanding how to build safe AI systems is becoming as important as building functional ones.
Technical Stack Summary
Backend
- CodeIgniter 3.1.x (PHP 7.3+)
- MySQL 5.5+ with optimized indexing
- Google Gemini API (gemini-2.5-flash-lite)
- Custom security middleware
Frontend
- Bootstrap 5.3.2 with custom dark theme
- jQuery 3.x for AJAX interactions
- Chart.js for admin visualizations
- Custom CSS with 16-bit retro gaming aesthetic
Infrastructure
- Apache with mod_rewrite
- File-based caching for leaderboards
- Database session management
- SSL/TLS encryption
Development Tools
- Composer for dependency management
- Git for version control
- PHPUnit for testing (admin functions)
- Visual Studio Code with PHP extensions
Reflections
Building Dominican Forge was my first experience creating an AI-integrated competitive platform under time pressure. The project pushed me to think beyond "does it work?" to "can it handle 500 students trying to break it simultaneously?"
The intersection of gamification, education, and AI proved incredibly rewarding. Watching students learn prompt engineering by playing rather than reading tutorials validated the core concept.
Most importantly, this project taught me that great developer experiences apply to student experiences too. Clear feedback, responsive design, and delightful interactions make learning or any complex task more approachable.
Dominican Forge was developed by Mat Jerico A. Sergio, MIS Head of the ICT Department, as an interactive exhibit for "How to Train Your AI", a celebration of St. Dominic College of Asia's 22nd Founding Anniversary (November 26, 2025, DRA Hall). Development began October 13, 2025, with final tuning completed during the event itself. The platform is proprietary software currently lent to SDCA for educational purposes.
Timeline: October 13 – November 26, 2025 (6 weeks from conception to production)
Event: How to Train Your AI | ICT Department Exhibit | 22nd Founding Anniversary
Role: MIS Head, ICT Department | Lead Developer & Designer
Tech Stack: CodeIgniter 3 · PHP · MySQL · Google Gemini API · Bootstrap 5 · jQuery · Chart.js
