I Made My Homepage Greet You in 26 Languages
I have a 3090 sitting in a Kubernetes cluster doing basically nothing. So I made it say hello.
Now when you hit my homepage, instead of a static "Hi, I'm Mahdi," you get a greeting in a random language. French. Japanese. Swahili. Whatever the model feels like.
The idea came from Jane Wong's site, where she uses AI to generate dynamic content. I thought: I have the hardware. Why not?
The Setup
Ollama running in Kubernetes. Qwen 2.5 32B as the model. It's excellent at following instructions and handles multilingual content without drama. Most importantly, it supports structured JSON output. No parsing headaches.
The architecture is dead simple:
Browser → Ollama API → Qwen 2.5 → JSON response
No middleware. No serverless functions. No API keys. Don't abuse it.
The Code
Pick a random language. Ask the model to translate. Display the result.
const languages = [
'French',
'Spanish',
'German',
'Japanese',
'Korean',
'Arabic',
'Swahili',
'Thai',
// ... 26 total
];
const language =
languages[Math.floor(Math.random() * languages.length)];
const prompt =
`Translate "Hi, I am Mahdi" into ${language}. ` +
`Keep Mahdi in Latin letters. ` +
`Return JSON: {"greeting": "..."}`;
const response = await fetch(
'https://ollama.yourdomain.com/api/chat',
{
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
model: 'qwen2.5:32b',
messages: [{ role: 'user', content: prompt }],
stream: false,
format: 'json',
}),
}
);
const data = await response.json();
const { greeting } =
JSON.parse(data.message.content);
That format: 'json' parameter is the whole trick. It constrains the model to valid JSON output. No wrestling with markdown. No chatty responses.
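With that constraint in place, data.message.content comes back as a plain JSON string, something like {"greeting": "Bonjour, je suis Mahdi"} when French is picked, which is why the second JSON.parse is all the post-processing needed.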
Ghost Integration
Ghost themes use Handlebars. I dropped the greeting markup into index.hbs with a loading spinner, inlined the JavaScript, and bundled the CSS separately.
If the API fails, it falls back to plain English. Silent failure. No one knows the AI took a nap.
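A minimal sketch of that wiring, assuming the fetch-and-parse code above lives in a fetchGreeting() helper and the spinner sits inside an element with id "greeting" (both names are my illustration here, not the theme's actual markup):
const el = document.getElementById('greeting');
fetchGreeting()                                          // wraps the fetch + JSON.parse from the code section
  .then((greeting) => { el.textContent = greeting; })    // swap the spinner for the translation
  .catch(() => { el.textContent = "Hi, I'm Mahdi"; });   // any failure drops back to plain English, silently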
What Does This Cost?
Electricity.
No per-token fees. No rate limits. No shipping visitor data to a third party. The 3090 pulls about 350W under load, but these requests finish in 1-2 seconds. Fractions of a cent per greeting.
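Back-of-the-envelope, with an assumed rate of $0.15 per kWh (yours will differ):
const watts = 350;                              // 3090 under load
const seconds = 2;                              // worst case per request
const pricePerKwh = 0.15;                       // assumed electricity rate, USD
const kwh = (watts * seconds) / 3600 / 1000;    // ≈ 0.0002 kWh per greeting
const cost = kwh * pricePerKwh;                 // ≈ $0.00003, a few thousandths of a cent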
The model needs ~20GB of VRAM. A 3090 has 24GB. You're set.
Was It Worth It?
Absolutely not.
A static greeting works fine. This is objectively unnecessary.
But there's something satisfying about refreshing the page and seeing "Hei, jeg er Mahdi" or "Xin chào, tôi là Mahdi." It's my website, running on my hardware, doing something slightly unexpected.
Sometimes that's reason enough.