A conversational assistant embedded in my portfolio that answers questions about my work, routes requests to a private backend, and serves responses from a Gemini-powered agent.
Visitors can ask natural-language questions about my projects, skills, and experience. The chat UI sends requests to a private Python backend deployed on Render, which calls Gemini and returns structured responses to the frontend.
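A minimal sketch of the backend's shape, assuming FastAPI and the google-generativeai SDK; the route names, request schema, and model name are illustrative placeholders, not the actual implementation:

```python
# Minimal backend sketch, assuming FastAPI + the google-generativeai SDK.
# Route names, schema, and model choice are illustrative placeholders.
import os

import google.generativeai as genai
from fastapi import FastAPI
from pydantic import BaseModel

# The Gemini key is read from the server environment (set in the Render
# dashboard), so it never ships to the browser.
genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")

app = FastAPI()


class ChatRequest(BaseModel):
    message: str


@app.post("/chat")
def chat(req: ChatRequest):
    # Request formatting/guardrails would be applied to req.message here.
    response = model.generate_content(req.message)
    # Hand the UI a clean, structured payload.
    return {"reply": response.text}


@app.get("/health")
def health():
    # Target for the keep-warm pinger described below.
    return {"status": "ok"}
```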
Highlights:
- Interactive assistant on a public portfolio site (visitor-facing AI feature).
- API key stays server-side (production-safe deployment).
The portfolio frontend sends chat requests to a Render-hosted backend endpoint. The backend applies request formatting/guardrails, calls Gemini, and returns a clean response payload to the UI. A scheduled pinger hits the backend health endpoint periodically to reduce cold-start delays.
```
Portfolio UI
    → Render Backend (Python)
        → Gemini API
    ← Response JSON

(Health pinger keeps backend warm)
```
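The pinger can be anything that issues a scheduled GET, such as a cron job or an external uptime monitor; a minimal Python sketch, with the URL and interval as placeholders:

```python
# Keep-warm pinger sketch: scheduled GETs against the health endpoint.
# The URL and 10-minute interval are placeholders, not the real config.
import time

import requests

HEALTH_URL = "https://example-backend.onrender.com/health"  # placeholder

while True:
    try:
        requests.get(HEALTH_URL, timeout=10)
    except requests.RequestException:
        pass  # a missed ping is harmless; the next cycle retries
    time.sleep(600)  # every 10 minutes, well inside typical idle timeouts
```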
Design priorities:
- Backend controls model behavior (see the sketch after this list).
- Operational reliability basics (health endpoint, keep-warm pings).
- UI stays simple; logic stays server-side.
- Designed to run at minimal cost while job hunting.
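One way the backend keeps control of model behavior is a server-side system instruction the frontend can never override. A sketch assuming the google-generativeai SDK's system_instruction parameter, with the wording invented for illustration:

```python
# Server-side behavior control sketch: the system instruction is pinned
# on the backend, so the frontend cannot alter it. Wording is illustrative.
import google.generativeai as genai

SYSTEM_PROMPT = (
    "You answer questions about the site owner's projects, skills, and "
    "experience. Politely decline requests outside that scope."
)

model = genai.GenerativeModel(
    "gemini-1.5-flash",
    system_instruction=SYSTEM_PROMPT,
)
```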
Typical user prompts the chatbot handles:
- "What projects have you built on AWS?"
- "Explain your EGO optimizer in simple terms."
- "What is your experience with agents and automation?"
- "Can you summarize your strongest skills for an ML/AI role?"