LLM • BACKEND • DEPLOYMENT

AI Portfolio Chatbot

A conversational assistant embedded in my portfolio that answers questions about my work, routes requests to a private backend, and serves responses from a Gemini-powered agent.

What It Does

Visitors can ask natural-language questions about my projects, skills, and experience. The chat UI sends requests to a private Python backend deployed on Render, which calls Gemini and returns structured responses to the frontend.

Real-time Q&A

Interactive assistant on a public portfolio site.

Input: Natural language
Output: Concise explanations + links

Visitor-facing AI feature

Private Backend

API key stays server-side.

Runtime: Python service on Render
Security: Gemini key not exposed

Production-safe deployment

Architecture

The portfolio frontend sends chat requests to a Render-hosted backend endpoint. The backend applies request formatting/guardrails, calls Gemini, and returns a clean response payload to the UI. A scheduled pinger hits the backend health endpoint periodically to reduce cold-start delays.

Portfolio UI → Render Backend (Python) → Gemini API → Response JSON → Portfolio UI
(Scheduled health pinger keeps the backend warm)
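A minimal sketch of that flow, assuming FastAPI and the google-generativeai SDK; the section doesn't specify the framework, model name, or environment-variable name, so those are illustrative:

```python
import os

import google.generativeai as genai
from fastapi import FastAPI
from pydantic import BaseModel

# The Gemini key is read from the server environment only -- it never
# reaches the browser. (Variable name is illustrative.)
genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")

app = FastAPI()

class ChatRequest(BaseModel):
    message: str

@app.post("/chat")
def chat(req: ChatRequest) -> dict:
    # Prompt shaping / guardrails happen here, server-side (see Key Features).
    response = model.generate_content(req.message)
    return {"reply": response.text}
```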

Key Features

Prompting + Guardrails

Backend controls model behavior.

Controls: system rules + formatting
Goal: useful, portfolio-relevant answers
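A sketch of how those controls might be applied server-side; the rules text below is illustrative, not the production prompt (recent google-generativeai versions also accept a system_instruction argument on GenerativeModel):

```python
# Illustrative guardrail text -- the real system rules aren't published here.
SYSTEM_RULES = (
    "You are the assistant for this portfolio site. Answer only questions "
    "about the owner's projects, skills, and experience. Keep answers under "
    "~120 words and include relevant project links. Politely decline "
    "off-topic requests."
)

def build_prompt(user_message: str) -> str:
    # Prepending the rules to every request keeps behavior consistent and
    # entirely server-side; visitors can't override them from the UI.
    return f"{SYSTEM_RULES}\n\nVisitor question: {user_message}"
```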

Health Monitoring

Operational reliability basics.

Endpoint: /health
Pinger: scheduled uptime checks
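The endpoint itself can stay trivial; a sketch extending the FastAPI app from the architecture section:

```python
from fastapi import FastAPI

app = FastAPI()  # same app object as in the architecture sketch

@app.get("/health")
def health() -> dict:
    # Cheap liveness probe: no Gemini call, just proof the service is up.
    # This is the route the scheduled pinger targets.
    return {"status": "ok"}
```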

Separation of Concerns

UI stays simple; logic stays server-side.

Frontend: chat UI + rendering
Backend: LLM calls + security
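One way to keep that boundary explicit is a fixed response contract that the UI simply renders; a hypothetical Pydantic sketch (field names are assumptions, not the actual schema):

```python
from pydantic import BaseModel

class Link(BaseModel):
    label: str
    url: str

class ChatResponse(BaseModel):
    # The frontend only renders these fields; prompt logic, the Gemini
    # call, and the API key never leave the backend.
    answer: str             # concise explanation text
    links: list[Link] = []  # optional pointers to projects/pages
```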

Cost Awareness

Designed to stay cheap while job hunting.

Key point: avoid always-on infra
Tradeoff: cold starts mitigated by pinger
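A minimal pinger sketch, assuming it runs on an external scheduler such as cron or a free uptime service; the URL is a placeholder, not the real backend address:

```python
import urllib.request

HEALTH_URL = "https://example-backend.onrender.com/health"  # placeholder URL

def ping() -> None:
    # One GET every few minutes keeps a free-tier instance from idling,
    # trading a trickle of traffic for fewer cold starts.
    with urllib.request.urlopen(HEALTH_URL, timeout=10) as resp:
        print(f"{HEALTH_URL} -> {resp.status}")

if __name__ == "__main__":
    ping()
```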

Example Questions

Typical user prompts the chatbot handles:

- "What projects have you built on AWS?" - "Explain your EGO optimizer in simple terms." - "What is your experience with agents and automation?" - "Can you summarize your strongest skills for an ML/AI role?"

Links

Back to Projects
Backend Health Endpoint