Next.js vs. React Router (Remix): Creating the Chat Interface – Part 5

This is where everything starts to come together. In this post, I’ll rebuild the core chat interface that allows users to interact with the AI — sending messages, receiving responses, and managing the flow of conversation. I’ll implement this in both Next.js and React Router, exploring how each framework handles dynamic interaction and the connection to AI models.

Creating the Oracle Page in Next.js

To get started, I added a new folder named oracle inside the existing (pages) route group. This is the same group that holds the Help and Policy pages, which means the new Oracle page will automatically use the shared layout defined for that group. Inside the oracle folder, I created a page.tsx file.

/app
  ├── layout.tsx
  ├── page.tsx
  ├── components/
  │   ├── About.tsx
  │   ├── AskOracleButton.tsx
  │   ├── Banner.tsx
  │   ├── Examples.tsx
  │   ├── FAQ.tsx
  │   ├── Footer.tsx
  │   ├── Hero.tsx
  │   ├── HowItWorks.tsx
  │   └── Menu.tsx
  └── (pages)/
      ├── layout.tsx
      ├── help/
      │   └── page.tsx
      ├── policy/
      │   └── page.tsx
      └── oracle/
          └── page.tsx
Bash

This page is simple on the surface but a little tricky underneath. It brings in four new components: two for the chat itself (<MessageList />, <ChatInput />) and two for configuring the AI (<SettingsBar />, <SettingsModal />). It also uses a custom React hook (useChat) to manage interactions with the AI.

// nextjs > app > (pages) > oracle > page.tsx
'use client'
import { useRef, useState } from "react";
import SettingsModal from "@/app/components/SettingsModal";
import { useChat } from "@/app/hooks/useChat";
import MessageList from "@/app/components/MessageList";
import ChatInput from "@/app/components/ChatInput";
import SettingsBar from "@/app/components/SettingsBar";

export default function OraclePage() {
  const [showModal, setShowModal] = useState(false);
  const messageContainerRef = useRef<HTMLDivElement>(null);
  
  // Settings state
  const [selectedApi, setSelectedApi] = useState('');
  const [selectedModel, setSelectedModel] = useState('');
  const [settings, setSettings] = useState({
    ollamaUrl: 'http://localhost:11434',
    openaiKey: '',
    claudeKey: ''
  });
  
  // Chat functionality
  const { 
    message, 
    setMessage, 
    chatMessages, 
    handleSend, 
    resizeTextarea 
  } = useChat({
    selectedApi, 
    selectedModel, 
    settings,
    messageContainerRef
  });

  // Update settings from modal
  const updateApiSettings = (newSettings: {
    ollamaUrl: string;
    openaiKey: string;
    claudeKey: string;
  }) => {
    setSettings(newSettings);
  };

  return (
    <section className="about px-[5px] py-[35px] text-center">
      <h1 className="text-3xl font-bold mb-4">Whisper Your Woes</h1>

      <div className="mx-auto mb-5 text-container">
        <p>The Oracle thrives on the struggles of entrepreneurs. Choose your words wisely!</p>
      </div>

      <div className="flex flex-col min-h-[300px] border border-gray-700 rounded-lg bg-white/5 shadow-md overflow-hidden mx-auto mb-5 text-container">
        {/* Settings Bar */}
        <SettingsBar
          selectedApi={selectedApi}
          setSelectedApi={setSelectedApi}
          selectedModel={selectedModel}
          setSelectedModel={setSelectedModel}
          onSettingsClick={() => setShowModal(true)}
        />

        {/* Message List */}
        <MessageList 
          messages={chatMessages} 
          messageContainerRef={messageContainerRef} 
        />

        {/* Input */}
        <ChatInput
          message={message}
          setMessage={setMessage}
          handleSend={handleSend}
          resizeTextarea={resizeTextarea}
          disabled={!selectedModel}
        />
      </div>

      <p className="mt-4">
        <strong><u>*The Oracle is powered by AI. AI can make mistakes. Verify important information.*</u></strong>
      </p>

      <SettingsModal 
        isOpen={showModal} 
        onClose={() => setShowModal(false)} 
        onSave={updateApiSettings}
        initialSettings={settings}
      />
    </section>
  );
}
TSX

To power the interaction with the AI, I use a custom React hook called useChat, which I placed in a new file at app/hooks/useChat.tsx. This hook is responsible for managing the state of the conversation. It keeps track of the user’s message (message), the full chat history (chatMessages), and provides helper functions like handleSend() to submit a message and resizeTextarea() to keep the input field user-friendly. Most importantly, useChat takes care of the logic for actually sending prompts to the AI and handling the streamed responses.

// nextjs > app > hooks > useChat.tsx
'use client'
import { useState, useEffect, RefObject } from 'react';
import { fetchOracleResponse } from '@/app/utils/api';

interface ChatMessage {
  role: 'user' | 'assistant';
  content: string;
}

interface UseChatProps {
  selectedApi: string;
  selectedModel: string;
  settings: {
    ollamaUrl: string;
    openaiKey: string;
    claudeKey: string;
  };
  messageContainerRef: RefObject<HTMLDivElement | null>;
}

export function useChat({ 
  selectedApi, 
  selectedModel, 
  settings,
  messageContainerRef 
}: UseChatProps) {
  const [message, setMessage] = useState('');
  const [chatMessages, setChatMessages] = useState<ChatMessage[]>([]);

  // Scroll to bottom whenever messages change
  useEffect(() => {
    if (messageContainerRef.current) {
      messageContainerRef.current.scrollTop = messageContainerRef.current.scrollHeight;
    }
  }, [chatMessages, messageContainerRef]);

  function handleSend() {
    if (!message.trim()) return;
    const prompt = message.trim();
    setMessage('');
    setChatMessages((prev) => [...prev, { role: 'user', content: prompt }]);
    askOracle(prompt);
  }

  function resizeTextarea(textarea: HTMLTextAreaElement) {
    textarea.style.height = 'auto';
    const lineHeight = parseInt(getComputedStyle(textarea).lineHeight || '24', 10);
    const maxHeight = lineHeight * 10;
    textarea.style.height = `${Math.min(textarea.scrollHeight, maxHeight)}px`;
  }

  async function askOracle(prompt: string) {
    if (!selectedModel || !selectedApi) return;

    const apiType = selectedApi as 'ollama' | 'openai' | 'claude';
    const apiKey = apiType === 'openai' 
      ? settings.openaiKey 
      : apiType === 'claude' 
        ? settings.claudeKey 
        : '';

    // Check for API keys if needed
    if ((apiType === 'openai' || apiType === 'claude') && !apiKey) {
      setChatMessages(prev => [
        ...prev, 
        { 
          role: 'assistant', 
          content: `**Error:** ${apiType.charAt(0).toUpperCase() + apiType.slice(1)} API key is required. Please set it in Settings.` 
        }
      ]);
      return;
    }

    let responseBuffer = '';
    const appendStream = (chunk: string) => {
      responseBuffer += chunk;
      setChatMessages((prev) => {
        const updated = [...prev];
        const last = updated[updated.length - 1];
        if (last?.role === 'assistant') {
          // Update existing assistant message
          updated[updated.length - 1] = { role: 'assistant', content: responseBuffer };
        } else {
          // Add new assistant message
          updated.push({ role: 'assistant', content: responseBuffer });
        }
        return updated;
      });
    };

    try {
      await fetchOracleResponse({
        prompt,
        model: selectedModel,
        apiType,
        apiKey,
        ollamaUrl: settings.ollamaUrl,
        onChunk: appendStream
      });
    } catch (err) {
      setChatMessages((prev) => [
        ...prev, 
        { 
          role: 'assistant', 
          content: `**Error:** Failed to fetch Oracle response. ${err instanceof Error ? err.message : ''}` 
        }
      ]);
      console.error(err);
    }
  }

  return {
    message,
    setMessage,
    chatMessages,
    handleSend,
    resizeTextarea
  };
}
TSX
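The interesting part of appendStream is its append-or-replace rule: while a response is streaming in, the growing buffer keeps overwriting the trailing assistant message instead of adding a new bubble per chunk. Here is that rule extracted as a pure function so it can be checked in isolation; the name `applyChunk` is mine for illustration, not part of the project.

```typescript
type ChatMessage = { role: 'user' | 'assistant'; content: string };

// Returns a new list in which `buffer` is the content of the trailing
// assistant message, creating that message if the last one is the user's.
function applyChunk(prev: ChatMessage[], buffer: string): ChatMessage[] {
  const updated = [...prev];
  const last = updated[updated.length - 1];
  if (last?.role === 'assistant') {
    updated[updated.length - 1] = { role: 'assistant', content: buffer };
  } else {
    updated.push({ role: 'assistant', content: buffer });
  }
  return updated;
}

// Simulate three streamed chunks arriving after a user message.
let messages: ChatMessage[] = [{ role: 'user', content: 'Hi' }];
let responseBuffer = '';
for (const chunk of ['Gre', 'et', 'ings!']) {
  responseBuffer += chunk;
  messages = applyChunk(messages, responseBuffer);
}
// messages now holds the user message plus one assistant message: 'Greetings!'
```

Because each chunk replaces the last assistant entry rather than appending, React re-renders a single growing bubble, which is exactly the typewriter effect you see in chat UIs.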

When a user submits a message, useChat calls a helper function named fetchOracleResponse, which is defined in a separate file at app/utils/api.ts. This function is in charge of communicating with the different AI services (OpenAI, Claude, or Ollama), depending on the selected provider. Let’s take a look at how that works next.

// nextjs > app > utils > api.ts 
'use client'
import { getSystemPrompt } from '@/app/utils/prompts';

// Define message types for each API
type OllamaMessage = { role: string; content: string };
type OpenAIMessage = { role: string; content: string | { type: string; text: string }[] };
type ClaudeMessage = { role: string; content: string };

// Define system prompt types
type OllamaSystemPrompt = { role: string; content: string };
type OpenAISystemPrompt = { role: string; content: { type: string; text: string }[] };

interface FetchOracleResponseParams {
  prompt: string;
  model: string;
  apiType: 'ollama' | 'openai' | 'claude';
  apiKey: string;
  ollamaUrl: string;
  onChunk: (chunk: string) => void;
}

export async function fetchOracleResponse({
  prompt,
  model,
  apiType,
  apiKey,
  ollamaUrl,
  onChunk
}: FetchOracleResponseParams) {
  // Get the system prompt for the selected API
  const systemPrompt = getSystemPrompt(apiType);
  
  // Set up conversation based on API type
  const conversation = createConversation(prompt, apiType, systemPrompt);
  
  // Make the appropriate API call
  if (apiType === 'ollama') {
    return fetchOllamaResponse(ollamaUrl, model, conversation as OllamaMessage[], onChunk);
  } else if (apiType === 'openai') {
    return fetchOpenAIResponse(apiKey, model, conversation as OpenAIMessage[], onChunk);
  } else if (apiType === 'claude') {
    return fetchClaudeResponse(apiKey, model, systemPrompt as string, conversation as ClaudeMessage[], onChunk);
  }
}

// Helper function to create the appropriate conversation format for each API
function createConversation(
  prompt: string, 
  apiType: string, 
  systemPrompt: OllamaSystemPrompt | OpenAISystemPrompt | string
): OllamaMessage[] | OpenAIMessage[] | ClaudeMessage[] {
  if (apiType === 'ollama') {
    return [
      systemPrompt as OllamaSystemPrompt,
      { role: 'user', content: prompt }
    ];
  } else if (apiType === 'openai') {
    return [
      systemPrompt as OpenAISystemPrompt,
      { role: 'user', content: [{ type: 'text', text: prompt }] }
    ];
  } else if (apiType === 'claude') {
    return [{ role: 'user', content: prompt }];
  }
  
  return [];
}

// Fetch response from Ollama API
async function fetchOllamaResponse(
  ollamaUrl: string, 
  model: string, 
  conversation: OllamaMessage[], 
  onChunk: (chunk: string) => void
) {
  const response = await fetch(`${ollamaUrl}/api/chat`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model, messages: conversation, stream: true }),
  });

  if (!response.body) throw new Error("No stream response");
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true }); // stream: true preserves multi-byte characters split across chunks
    let boundary = buffer.indexOf('\n');
    while (boundary !== -1) {
      const chunk = buffer.slice(0, boundary).trim();
      buffer = buffer.slice(boundary + 1);
      try {
        const json = JSON.parse(chunk);
        if (json.message?.content) onChunk(json.message.content);
      } catch {}
      boundary = buffer.indexOf('\n');
    }
  }
}

// Fetch response from OpenAI API
async function fetchOpenAIResponse(
  apiKey: string, 
  model: string, 
  conversation: OpenAIMessage[], 
  onChunk: (chunk: string) => void
) {
  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model,
      messages: conversation,
      stream: true,
    }),
  });

  if (!response.body) throw new Error("No stream response");
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    let boundary = buffer.indexOf('\n');
    while (boundary !== -1) {
      const line = buffer.slice(0, boundary).trim();
      buffer = buffer.slice(boundary + 1);
      if (line.startsWith('data: ') && !line.includes('[DONE]')) {
        try {
          const data = JSON.parse(line.slice(6));
          const delta = data.choices?.[0]?.delta?.content || '';
          onChunk(delta);
        } catch (e) {
          console.error('Error parsing OpenAI response:', e);
        }
      }
      boundary = buffer.indexOf('\n');
    }
  }
}

// Fetch response from Claude API
async function fetchClaudeResponse(
  apiKey: string, 
  model: string, 
  systemPrompt: string, 
  conversation: ClaudeMessage[], 
  onChunk: (chunk: string) => void
) {
  const response = await fetch('https://api.anthropic.com/v1/messages', {
    method: 'POST',
    headers: {
      'x-api-key': apiKey,
      'content-type': 'application/json',
      'anthropic-version': '2023-06-01',
    },
    body: JSON.stringify({
      model,
      system: systemPrompt,
      messages: conversation,
      stream: true,
      max_tokens: 1024,
    }),
  });

  if (!response.body) throw new Error("No stream response");
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    let boundary = buffer.indexOf('\n');
    while (boundary !== -1) {
      const line = buffer.slice(0, boundary).trim();
      buffer = buffer.slice(boundary + 1);
      if (line.startsWith('data: ')) {
        try {
          const data = JSON.parse(line.slice(6));
          if (data.type === 'content_block_delta') {
            const delta = data.delta?.text || '';
            onChunk(delta);
          }
        } catch (e) {
          console.error('Error parsing Claude response:', e);
        }
      }
      boundary = buffer.indexOf('\n');
    }
  }
}
TypeScript

The api.ts file is where all the actual communication with the AI providers happens. It exports a single function called fetchOracleResponse, which takes care of sending the user’s prompt to the selected API (OpenAI, Claude, or Ollama) and streaming the response back piece by piece. It begins by determining which system prompt to use and prepares a conversation payload tailored to that provider’s API. From there, it dispatches the request to the appropriate handler function (fetchOpenAIResponse, fetchClaudeResponse, or fetchOllamaResponse). Each of these handlers deals with reading the streaming response and feeding chunks of text back to the app through a callback.
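All three handlers share the same line-buffering pattern: a network chunk can split a line anywhere, so they accumulate text in a buffer and only parse complete, newline-terminated lines. Here is that pattern as a small standalone sketch, using Ollama-style NDJSON as the input; the helper name `drainLines` is mine for illustration.

```typescript
// Appends a chunk to the buffer and returns [completeLines, remainder].
// Only lines ending in '\n' are emitted; a partial trailing line stays
// in the remainder until the next chunk completes it.
function drainLines(buffer: string, chunk: string): [string[], string] {
  buffer += chunk;
  const lines: string[] = [];
  let boundary = buffer.indexOf('\n');
  while (boundary !== -1) {
    lines.push(buffer.slice(0, boundary).trim());
    buffer = buffer.slice(boundary + 1);
    boundary = buffer.indexOf('\n');
  }
  return [lines, buffer];
}

// Two network chunks that split a JSON object mid-string:
const chunks = ['{"message":{"content":"Hel', 'lo"}}\n{"done":true}\n'];
let buffer = '';
const contents: string[] = [];
for (const chunk of chunks) {
  const [lines, rest] = drainLines(buffer, chunk);
  buffer = rest;
  for (const line of lines) {
    const json = JSON.parse(line);
    if (json.message?.content) contents.push(json.message.content);
  }
}
// contents is now ['Hello'] — the split line was parsed only once complete
```

Without this buffering, `JSON.parse` would throw on the first chunk, since it ends mid-object. The OpenAI and Claude handlers apply the same idea to SSE `data:` lines instead of raw JSON lines.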

The system prompts are kept in a separate file at app/utils/prompts.ts, and each AI provider expects its own format for them. Let’s take a quick look at that file.

// nextjs > app > utils > prompts.ts
'use client'

// System prompts for each API service
const systemPrompts = {
  ollama: {
    role: "system",
    content: "You are the Oracle of Tartarus Insight, a mystical and all-knowing guide for entrepreneurs lost in the abyss of business struggles. Your mission is to provide practical, actionable advice to help them escape their challenges, but you must do so in a playful, lighthearted, and mystical tone that aligns with your enigmatic persona.\n\nSpeak as if you are a wise, ancient Oracle. Use playful, mystical language, but ensure your responses are approachable and clear to everyone. Gently tease the user about their predicament, but always remain encouraging and respectful. Your advice must be grounded and useful, covering topics like strategy, growth, marketing, and other entrepreneurial challenges. Help the user see a clear path forward.\n\nUse humor that is lighthearted and self-aware. Feel free to poke fun at the user's situation, but always ensure it feels supportive rather than dismissive. Despite your mystical tone, ensure your answers are straightforward and actionable. Avoid being vague or overly abstract.\n\nFor example:\nUser: \"How do I get more customers for my online store?\"\nOracle: \"Ah, a common plight for a merchant stranded in the abyss of obscurity. Fear not! The Oracle sees all. Begin by summoning the power of social media ads—Facebook and Instagram shall be your allies. Offer discounts to entice the wary. And remember: clear, compelling product photos are worth their weight in gold. Go now, and may your customer count multiply like stars in the night sky!\""
  },
  openai: {
    role: "system",
    content: [
      {
        type: "text",
        text: "You are the Oracle of Tartarus Insight, a mystical and all-knowing guide for entrepreneurs lost in the abyss of business struggles. Your mission is to provide practical, actionable advice to help them escape their challenges, but you must do so in a playful, lighthearted, and mystical tone that aligns with your enigmatic persona.\n\nSpeak as if you are a wise, ancient Oracle. Use playful, mystical language, but ensure your responses are approachable and clear to everyone. Gently tease the user about their predicament, but always remain encouraging and respectful. Your advice must be grounded and useful, covering topics like strategy, growth, marketing, and other entrepreneurial challenges. Help the user see a clear path forward.\n\nUse humor that is lighthearted and self-aware. Feel free to poke fun at the user's situation, but always ensure it feels supportive rather than dismissive. Despite your mystical tone, ensure your answers are straightforward and actionable. Avoid being vague or overly abstract.\n\nFor example:\nUser: \"How do I get more customers for my online store?\"\nOracle: \"Ah, a common plight for a merchant stranded in the abyss of obscurity. Fear not! The Oracle sees all. Begin by summoning the power of social media ads—Facebook and Instagram shall be your allies. Offer discounts to entice the wary. And remember: clear, compelling product photos are worth their weight in gold. Go now, and may your customer count multiply like stars in the night sky!\""
      }
    ]
  },
  claude: "You are the Oracle of Tartarus Insight, a mystical and all-knowing guide for entrepreneurs lost in the abyss of business struggles. Your mission is to provide practical, actionable advice to help them escape their challenges, but you must do so in a playful, lighthearted, and mystical tone that aligns with your enigmatic persona.\n\nSpeak as if you are a wise, ancient Oracle. Use playful, mystical language, but ensure your responses are approachable and clear to everyone. Gently tease the user about their predicament, but always remain encouraging and respectful. Your advice must be grounded and useful, covering topics like strategy, growth, marketing, and other entrepreneurial challenges. Help the user see a clear path forward.\n\nUse humor that is lighthearted and self-aware. Feel free to poke fun at the user's situation, but always ensure it feels supportive rather than dismissive. Despite your mystical tone, ensure your answers are straightforward and actionable. Avoid being vague or overly abstract.\n\nFor example:\nUser: \"How do I get more customers for my online store?\"\nOracle: \"Ah, a common plight for a merchant stranded in the abyss of obscurity. Fear not! The Oracle sees all. Begin by summoning the power of social media ads—Facebook and Instagram shall be your allies. Offer discounts to entice the wary. And remember: clear, compelling product photos are worth their weight in gold. Go now, and may your customer count multiply like stars in the night sky!\""
};

export function getSystemPrompt(apiType: 'ollama' | 'openai' | 'claude') {
  return systemPrompts[apiType];
}
TypeScript

Back in page.tsx, the Oracle interface is built from four main components, which I placed in app/components/. Two are focused on the actual chat experience — <MessageList /> and <ChatInput /> — and the other two — <SettingsBar /> and <SettingsModal /> — let the user choose the AI model they want to use.

MessageList Component

This component is in charge of showing the full conversation between the user and the Oracle. It receives the messages from our useChat hook and displays them in a scrollable list, rendering the content as Markdown. It also attaches the shared ref to the scrollable container; the useEffect inside useChat uses that ref to scroll to the bottom whenever a new message arrives, so the user always sees the most recent reply.

// nextjs > app > components > MessageList.tsx
'use client'
import { RefObject } from 'react';
import markdownit from 'markdown-it';

const md = new markdownit();

interface Message {
  role: 'user' | 'assistant';
  content: string;
}

interface MessageListProps {
  messages: Message[];
  messageContainerRef: RefObject<HTMLDivElement | null>;
}

export default function MessageList({ messages, messageContainerRef }: MessageListProps) {
  return (
    <div ref={messageContainerRef} className="flex-1 px-4 py-2 overflow-y-auto text-left space-y-3">
      {messages.map((msg, idx) => (
        <div
          key={idx}
          className={`py-1 px-2 rounded w-fit ${
            msg.role === 'user' ? 'bg-zinc-700 text-white' : 'bg-blue-900 text-white'
          }`}
        >
          <div className="[&>p]:!m-0" dangerouslySetInnerHTML={{ __html: md.render(msg.content) }} />
        </div>
      ))}
    </div>
  );
}
TSX

ChatInput Component

This is where the user types their message. It uses the message and setMessage values from the hook to control the input. When the user presses Enter or clicks the send button, it triggers handleSend, which passes the message to the AI and resets the input field.

// nextjs > app > components > ChatInput.tsx
'use client'

interface ChatInputProps {
  message: string;
  setMessage: (message: string) => void;
  handleSend: () => void;
  resizeTextarea: (textarea: HTMLTextAreaElement) => void;
  disabled: boolean;
}

export default function ChatInput({ 
  message, 
  setMessage, 
  handleSend, 
  resizeTextarea,
  disabled
}: ChatInputProps) {
  return (
    <div className="flex items-center gap-2 px-4 py-2 bg-gray-700/50">
      <textarea
        rows={1}
        value={message}
        disabled={disabled}
        onChange={(e) => {
          setMessage(e.target.value);
          resizeTextarea(e.target);
        }}
        onKeyDown={(e) => {
          if (e.key === 'Enter' && !e.shiftKey) {
            e.preventDefault();
            handleSend();
          }
        }}
        placeholder="Type your message..."
        className="flex-1 bg-transparent text-white text-base px-2 py-2 resize-none focus:outline-none max-h-[12rem]"
      />
      <button
        onClick={handleSend}
        disabled={disabled || !message.trim()}
        className="text-white border border-white px-3 py-1 rounded-full hover:bg-gray-600 disabled:opacity-50"
      >
        Send
      </button>
    </div>
  );
}
TSX

SettingsBar Component

// nextjs > app > components > SettingsBar.tsx
'use client'
import Link from "next/link";
import { FaCog, FaLifeRing } from "react-icons/fa";
import ApiModelSelector from "@/app/components/ApiModelSelector";

interface SettingsBarProps {
  selectedApi: string;
  setSelectedApi: (api: string) => void;
  selectedModel: string;
  setSelectedModel: (model: string) => void;
  onSettingsClick: () => void;
}

export default function SettingsBar({
  selectedApi,
  setSelectedApi,
  selectedModel,
  setSelectedModel,
  onSettingsClick
}: SettingsBarProps) {
  return (
    <div className="flex justify-between items-center px-4 py-2 border-b border-gray-700 text-sm">
      <ApiModelSelector
        selectedApi={selectedApi}
        setSelectedApi={setSelectedApi}
        selectedModel={selectedModel}
        setSelectedModel={setSelectedModel}
      />
      <div className="flex items-center gap-3">
        <button
          onClick={onSettingsClick}
          className="flex items-center gap-1 bg-transparent text-white border border-white px-3 py-2 rounded-full hover:bg-gray-600">
          Settings <FaCog />
        </button>
        <Link href="/help" className="flex items-center gap-1 text-white text-sm hover:underline">
          Help <FaLifeRing />
        </Link>
      </div>
    </div>
  );
}
TSX

This is where the user can quickly choose which AI model they want to talk to. It also has a button to open the full settings. To display the available models, it uses another component called <ApiModelSelector />, which shows a dropdown with the options depending on the selected API.

// nextjs > app > components > ApiModelSelector.tsx
'use client';
import { useEffect, useState } from 'react';

interface OllamaModel {
  name: string;
}

interface OllamaResponse {
  models: OllamaModel[];
}

interface Props {
  selectedApi: string;
  setSelectedApi: (api: string) => void;
  selectedModel: string;
  setSelectedModel: (model: string) => void;
}

export default function ApiModelSelector({
  selectedApi,
  setSelectedApi,
  selectedModel,
  setSelectedModel,
}: Props) {
  const [models, setModels] = useState<string[]>([]);
  const [error, setError] = useState('');
  const [showModelSelect, setShowModelSelect] = useState(false);

  useEffect(() => {
    async function fetchModels() {
      if (selectedApi === 'ollama') {
        try {
          // Note: this uses the default Ollama URL; the configurable settings.ollamaUrl is not wired into this component
          const res = await fetch('http://localhost:11434/api/tags');
          if (!res.ok) throw new Error(`HTTP error! Status: ${res.status}`);
          const data: OllamaResponse = await res.json();
          const modelNames = data.models.map((m) => m.name);

          if (modelNames.length === 0) {
            throw new Error('No models available. Download one from https://ollama.com/search?q=llama');
          }

          setModels(modelNames);
          setSelectedModel(modelNames[0]);
          setShowModelSelect(true);
        } catch (err) {
            if (err instanceof Error) {
                setError(`Error: ${err.message}. Make sure Ollama is running...`);
            } else {
                setError('An unknown error occurred.');
            }
            setShowModelSelect(false);
        }
      } else if (selectedApi === 'openai') {
        const openaiModels = ['gpt-4o-mini', 'gpt-4o'];
        setModels(openaiModels);
        setSelectedModel(openaiModels[0]);
        setShowModelSelect(true);
      } else if (selectedApi === 'claude') {
        const claudeModels = ['claude-3-5-haiku-20241022', 'claude-3-5-sonnet-20241022'];
        setModels(claudeModels);
        setSelectedModel(claudeModels[0]);
        setShowModelSelect(true);
      } else {
        setShowModelSelect(false);
        setModels([]);
        setSelectedModel('');
      }
    }

    if (selectedApi) {
      fetchModels();
    }
  }, [selectedApi, setSelectedModel]);

  const handleBack = () => {
    setSelectedApi('');
    setSelectedModel('');
    setShowModelSelect(false);
    setError('');
  };

  return (
    <div className="flex items-center gap-2">
      {!showModelSelect ? (
        <select
          value={selectedApi}
          onChange={(e) => setSelectedApi(e.target.value)}
          className="dropdown block px-2 py-1 text-white border border-white bg-transparent rounded"
        >
          <option value="">Choose an API</option>
          <option value="ollama">Ollama</option>
          <option value="openai">OpenAI GPT</option>
          <option value="claude">Anthropic Claude</option>
        </select>
      ) : (
        <div className="models-wrapper flex items-center gap-2">
          <span className="cursor-pointer" onClick={handleBack}>←</span>
          <select
            value={selectedModel}
            onChange={(e) => setSelectedModel(e.target.value)}
            className="dropdown bg-transparent text-white rounded px-2 py-1"
          >
            {models.map((model) => (
              <option key={model} value={model}>{model}</option>
            ))}
          </select>
        </div>
      )}

      {error && <div className="text-red-500 text-sm mt-1">{error}</div>}
    </div>
  );
}
TSX

SettingsModal Component

This is a popup where the user can enter their own API keys or point the app at a custom Ollama server.

// nextjs > app > components > SettingsModal.tsx
'use client';
import { useEffect, useState } from 'react';

interface Props {
  isOpen: boolean;
  onClose: () => void;
  onSave?: (settings: {
    ollamaUrl: string;
    openaiKey: string;
    claudeKey: string;
  }) => void;
  initialSettings?: {
    ollamaUrl: string;
    openaiKey: string;
    claudeKey: string;
  };
}

export default function SettingsModal({ isOpen, onClose, onSave, initialSettings }: Props) {
  const [selectedApi, setSelectedApi] = useState<'ollama' | 'openai' | 'anthropic'>('ollama');
  const [ollamaUrl, setOllamaUrl] = useState(initialSettings?.ollamaUrl || 'http://localhost:11434');
  const [openaiKey, setOpenaiKey] = useState(initialSettings?.openaiKey || '');
  const [claudeKey, setClaudeKey] = useState(initialSettings?.claudeKey || '');

  // Update local state when initialSettings change
  useEffect(() => {
    if (initialSettings) {
      setOllamaUrl(initialSettings.ollamaUrl);
      setOpenaiKey(initialSettings.openaiKey);
      setClaudeKey(initialSettings.claudeKey);
    }
  }, [initialSettings]);

  useEffect(() => {
    const handleKeyDown = (e: KeyboardEvent) => {
      if (e.key === 'Escape') onClose();
    };
    document.addEventListener('keydown', handleKeyDown);
    return () => document.removeEventListener('keydown', handleKeyDown);
  }, [onClose]);

  if (!isOpen) return null;

  const handleSave = () => {
    // If onSave is provided, call it with the current settings
    if (onSave) {
      onSave({
        ollamaUrl,
        openaiKey,
        claudeKey,
      });
    }
    onClose();
  };

  return (
    <div className="fixed inset-0 bg-black/50 flex justify-center items-center z-50">
      <div className="bg-[#262626] rounded-md w-full max-w-2xl mx-4 overflow-hidden">
        {/* Header */}
        <div className="flex justify-between items-center border-b border-gray-700 p-4 text-white">
          <h2 className="text-lg font-semibold m-0">Settings</h2>
          <button onClick={onClose} className="text-white text-xl cursor-pointer">×</button>
        </div>

        {/* Body */}
        <div className="p-4 text-white space-y-4">
          {/* API Selector */}
          <div>
            <label htmlFor="api-select" className="block mb-1">Choose API:</label>
            <select
              id="api-select"
              value={selectedApi}
              onChange={(e) => setSelectedApi(e.target.value as typeof selectedApi)}
              className="w-full bg-[#171717] border border-white text-white px-2 py-1 rounded"
            >
              <option value="ollama">Ollama</option>
              <option value="openai">OpenAI GPT</option>
              <option value="anthropic">Anthropic Claude</option>
            </select>
          </div>

          {/* API-specific Inputs */}
          {selectedApi === 'ollama' && (
            <div>
              <label htmlFor="ollama-url" className="block mb-1">Ollama API Connection:</label>
              <input
                id="ollama-url"
                type="text"
                value={ollamaUrl}
                onChange={(e) => setOllamaUrl(e.target.value)}
                className="w-full bg-[#171717] border border-white text-white px-2 py-1 rounded"
              />
            </div>
          )}

          {selectedApi === 'openai' && (
            <div>
              <label htmlFor="openai-api-key" className="block mb-1">OpenAI API Key:</label>
              <input
                id="openai-api-key"
                type="text"
                placeholder="Enter your OpenAI API key"
                value={openaiKey}
                onChange={(e) => setOpenaiKey(e.target.value)}
                className="w-full bg-[#171717] border border-white text-white px-2 py-1 rounded"
              />
            </div>
          )}

          {selectedApi === 'anthropic' && (
            <div>
              <label htmlFor="claude-api-key" className="block mb-1">Claude API Key:</label>
              <input
                id="claude-api-key"
                type="text"
                placeholder="Enter your Claude API key"
                value={claudeKey}
                onChange={(e) => setClaudeKey(e.target.value)}
                className="w-full bg-[#171717] border border-white text-white px-2 py-1 rounded"
              />
            </div>
          )}
        </div>

        {/* Footer */}
        <div className="flex justify-end gap-2 p-4 border-t border-gray-700">
          <button
            onClick={onClose}
            className="text-white border border-white px-3 py-1 rounded-full hover:bg-gray-600"
          >
            Cancel
          </button>
          <button
            onClick={handleSave}
            className="text-white border border-white px-3 py-1 rounded-full hover:bg-gray-600"
          >
            Save
          </button>
        </div>
      </div>
    </div>
  );
}
TSX

After creating all the new files and folders, the file structure looks like this:

/app
  ├── layout.tsx
  ├── page.tsx 
  
  ├── components/
      ├── About.tsx 
      ├── ApiModelSelector.tsx         
      ├── AskOracleButton.tsx
      ├── Banner.tsx
      ├── ChatInput.tsx
      ├── Examples.tsx
      ├── FAQ.tsx
      ├── Footer.tsx
      ├── Hero.tsx
      ├── HowItWorks.tsx
      ├── Menu.tsx
      ├── MessageList.tsx
      ├── SettingsBar.tsx
      └── SettingsModal.tsx
  
  ├── utils/
      ├── apis.ts 
      └── prompts.ts
  
  ├── hooks/ 
      └── useChat.ts
  
  └── (pages)/
       ├── layout.tsx
       ├── help/
           └── page.tsx
       ├── policy/
            └── page.tsx
       └── oracle/
             └── page.tsx
Bash

Styling the Scrollbar in ChatInput

The last thing I did in the Next.js project was to customize the scrollbar in the ChatInput component and style the dropdown used for selecting the model. I added the following code to the globals.css file.

/* nextjs > app > globals.css */
@import "tailwindcss";

/* Sets custom breakpoints so we can use them later with Tailwind */
@theme {
    --breakpoint-xs: 280px;
    --breakpoint-sm: 450px;
    --breakpoint-md: 675px;
    --breakpoint-lg: 768px;
    --breakpoint-xl: 1024px;
    --breakpoint-2xl: 2048px;

    --shadow-example: 0px 0px 11px rgb(255 255 255 / 46%);
}

h1, h2, h3, h4, h5, h6 {
    font-family: var(--font-caesar-dressing);
}

h1, h2, h3 {
    font-weight: 400;
    font-style: normal;
    text-transform: uppercase;
    line-height: normal;
    margin-bottom: 16px;
}

h1 {
    font-size: 32px;
}

h2 {
    font-size: 20px;
}

p {
    font-size: 16px;
    margin-bottom: 8px;
}

/* Applies base styles */
body {
    font-family: var(--font-poppins);
    line-height: 1.6;
    color: #ddd;
    background-color: #0e0e0e;
}

.text-container {
    max-width: 810px;
}

.banner-shadow {
    text-shadow: 2px 2px 15px black;
}

.dropdown option {
    background-color: #343434;
    color: #ffffff;
}

/* Nav menu styles, applied from the 450px mobile breakpoint up */
/* Apply a decorative border image with plain CSS because Tailwind doesn’t support border-image out of the box. */
@media (min-width: 450px) { 
    header {
        border-image: url('/images/meandros-pattern.webp') 30 round;
    }
}

@media (min-width: 768px) {
    h1 {
        font-size: 40px;
    }

    h2 {
        font-size: 24px;
    }
    
    p {
        font-size: 18px;
    }
}

.scrollbar {
    /* Standard properties (Firefox, and recent Chromium-based browsers) */
    scrollbar-width: thin;
    scrollbar-color: #404040 transparent;

    /* Legacy Edge / Internet Explorer */
    -ms-overflow-style: -ms-autohiding-scrollbar;
}

.scrollbar::-webkit-scrollbar {
    width: 4px;
    height: 10px;
}

.scrollbar::-webkit-scrollbar-track {
    background-color: transparent;
}

.scrollbar::-webkit-scrollbar-thumb {
    background-color: #404040;
    border-radius: 5px;
}

.scrollbar::-webkit-scrollbar-thumb:hover {
    background-color: #555;
}

CSS

Creating the Oracle Page in React Router

After completing the Next.js version, I replicated the page using React Router. I reused the same components, custom hooks, and styles. The main difference lies in how the application is structured and how routing is handled. Here’s the file structure inside the React Router app:

/app
  ├── layouts/
       └── pages.tsx
  ├── routes/
       ├── home.tsx
       ├── help.tsx
       ├── oracle.tsx
       └── policy.tsx
  ├── components/
       ├── ask-oracle-button.tsx
       ├── banner.tsx
       ├── footer.tsx
       └── menu.tsx
  ├── home/
       ├── about.tsx
       ├── examples.tsx
       ├── faq.tsx
       ├── hero.tsx
       └── how-it-works.tsx
  ├── oracle/
       ├── api-model-selector.tsx
       ├── api.ts
       ├── chat-input.tsx
       ├── message-list.tsx
       ├── prompts.ts
       ├── settings-bar.tsx
       ├── settings-modal.tsx
       └── useChat.ts
  └── routes.ts
Bash

Inside the routes folder, I created a file named oracle.tsx. This file defines the Oracle page component and is registered in the routes.ts file, which handles all the route declarations for the app.

// react-router > app > routes > oracle.tsx
import { useState, useRef } from "react";
import { useChat } from "~/oracle/useChat";
import ChatInput from "~/oracle/chat-input";
import MessageList from "~/oracle/message-list";
import SettingsBar from "~/oracle/settings-bar";
import SettingsModal from "~/oracle/settings-modal";

export default function OraclePage() {
    const [showModal, setShowModal] = useState(false);
    const messageContainerRef = useRef<HTMLDivElement>(null);
    const [selectedApi, setSelectedApi] = useState('');
    const [selectedModel, setSelectedModel] = useState('');
    const [settings, setSettings] = useState({
        ollamaUrl: 'http://localhost:11434',
        openaiKey: '',
        claudeKey: ''
    });

    // Chat functionality
    const { 
        message, 
        setMessage, 
        chatMessages, 
        handleSend, 
        resizeTextarea 
    } = useChat({
        selectedApi, 
        selectedModel, 
        settings,
        messageContainerRef
    });

    // Update settings from modal
    const updateApiSettings = (newSettings: {
        ollamaUrl: string;
        openaiKey: string;
        claudeKey: string;
    }) => {
        setSettings(newSettings);
    };

    return (
        <section className="about px-[5px] py-[35px] text-center">
        <h1 className="text-3xl font-bold mb-4">Whisper Your Woes</h1>

        <div className="mx-auto mb-5 text-container">
            <p>The Oracle thrives on the struggles of entrepreneurs. Choose your words wisely!</p>
        </div>

        <div className="flex flex-col min-h-[300px] border border-gray-700 rounded-lg bg-white/5 shadow-md overflow-hidden mx-auto mb-5 text-container">
            {/* Settings Bar */}
            <SettingsBar 
                selectedApi={selectedApi}
                setSelectedApi={setSelectedApi}
                selectedModel={selectedModel}
                setSelectedModel={setSelectedModel}
                onSettingsClick={() => setShowModal(true)}
            />

            {/* Message List */}
            <MessageList 
                messages={chatMessages} 
                messageContainerRef={messageContainerRef} 
            />

            {/* Input */}
            <ChatInput
                message={message}
                setMessage={setMessage}
                handleSend={handleSend}
                resizeTextarea={resizeTextarea}
                disabled={!selectedModel}
            />
        </div>

        <p className="mt-4">
            <strong><u>*The Oracle is powered by AI. AI can make mistakes. Verify important information.*</u></strong>
        </p>

        <SettingsModal 
            isOpen={showModal} 
            onClose={() => setShowModal(false)} 
            onSave={updateApiSettings}
            initialSettings={settings}
        />
        </section>
    );
}
TSX

// react-router > app > routes.ts
import { type RouteConfig, index, layout, route } from "@react-router/dev/routes";

export default [
    index("routes/home.tsx"),
    
    layout("./layouts/pages.tsx", [
        route("help", "./routes/help.tsx"),
        route("policy", "./routes/policy.tsx"),
        route("oracle", "./routes/oracle.tsx")
    ])
] satisfies RouteConfig;
TSX

I also created an oracle folder inside the app directory to keep all the components, hooks, and utility files related to the Oracle page organized in one place. The files inside the oracle folder are almost identical to their Next.js counterparts; they serve the same purpose and contain the same core logic.

useChat.ts

// react-router > app > oracle > useChat.ts
import { useState, useEffect, type RefObject } from 'react';
import { fetchOracleResponse } from 'app/oracle/api';

interface ChatMessage {
  role: 'user' | 'assistant';
  content: string;
}

interface UseChatProps {
  selectedApi: string;
  selectedModel: string;
  settings: {
    ollamaUrl: string;
    openaiKey: string;
    claudeKey: string;
  };
  messageContainerRef: RefObject<HTMLDivElement | null>;
}

export function useChat({ 
  selectedApi, 
  selectedModel, 
  settings,
  messageContainerRef 
}: UseChatProps) {
  const [message, setMessage] = useState('');
  const [chatMessages, setChatMessages] = useState<ChatMessage[]>([]);

  // Scroll to bottom whenever messages change
  useEffect(() => {
    if (messageContainerRef.current) {
      messageContainerRef.current.scrollTop = messageContainerRef.current.scrollHeight;
    }
  }, [chatMessages, messageContainerRef]);

  function handleSend() {
    if (!message.trim()) return;
    const prompt = message.trim();
    setMessage('');
    setChatMessages((prev) => [...prev, { role: 'user', content: prompt }]);
    askOracle(prompt);
  }

  function resizeTextarea(textarea: HTMLTextAreaElement) {
    textarea.style.height = 'auto';
    const lineHeight = parseInt(getComputedStyle(textarea).lineHeight || '24', 10);
    const maxHeight = lineHeight * 10;
    textarea.style.height = `${Math.min(textarea.scrollHeight, maxHeight)}px`;
  }

  async function askOracle(prompt: string) {
    if (!selectedModel || !selectedApi) return;

    const apiType = selectedApi as 'ollama' | 'openai' | 'claude';
    const apiKey = apiType === 'openai' 
      ? settings.openaiKey 
      : apiType === 'claude' 
        ? settings.claudeKey 
        : '';

    // Check for API keys if needed
    if ((apiType === 'openai' || apiType === 'claude') && !apiKey) {
      setChatMessages(prev => [
        ...prev, 
        { 
          role: 'assistant', 
          content: `**Error:** ${apiType.charAt(0).toUpperCase() + apiType.slice(1)} API key is required. Please set it in Settings.` 
        }
      ]);
      return;
    }

    let responseBuffer = '';
    const appendStream = (chunk: string) => {
      responseBuffer += chunk;
      setChatMessages((prev) => {
        const updated = [...prev];
        const last = updated[updated.length - 1];
        if (last?.role === 'assistant') {
          // Update existing assistant message
          updated[updated.length - 1] = { role: 'assistant', content: responseBuffer };
        } else {
          // Add new assistant message
          updated.push({ role: 'assistant', content: responseBuffer });
        }
        return updated;
      });
    };

    try {
      await fetchOracleResponse({
        prompt,
        model: selectedModel,
        apiType,
        apiKey,
        ollamaUrl: settings.ollamaUrl,
        onChunk: appendStream
      });
    } catch (err) {
      setChatMessages((prev) => [
        ...prev, 
        { 
          role: 'assistant', 
          content: `**Error:** Failed to fetch Oracle response. ${err instanceof Error ? err.message : ''}` 
        }
      ]);
      console.error(err);
    }
  }

  return {
    message,
    setMessage,
    chatMessages,
    handleSend,
    resizeTextarea
  };
}
TSX

api.ts

// react-router > app > oracle > api.ts
import { getSystemPrompt } from 'app/oracle/prompts';

// Define message types for each API
type OllamaMessage = { role: string; content: string };
type OpenAIMessage = { role: string; content: string | { type: string; text: string }[] };
type ClaudeMessage = { role: string; content: string };

// Define system prompt types
type OllamaSystemPrompt = { role: string; content: string };
type OpenAISystemPrompt = { role: string; content: { type: string; text: string }[] };

interface FetchOracleResponseParams {
  prompt: string;
  model: string;
  apiType: 'ollama' | 'openai' | 'claude';
  apiKey: string;
  ollamaUrl: string;
  onChunk: (chunk: string) => void;
}

export async function fetchOracleResponse({
  prompt,
  model,
  apiType,
  apiKey,
  ollamaUrl,
  onChunk
}: FetchOracleResponseParams) {
  // Get the system prompt for the selected API
  const systemPrompt = getSystemPrompt(apiType);
  
  // Set up conversation based on API type
  const conversation = createConversation(prompt, apiType, systemPrompt);
  
  // Make the appropriate API call
  if (apiType === 'ollama') {
    return fetchOllamaResponse(ollamaUrl, model, conversation as OllamaMessage[], onChunk);
  } else if (apiType === 'openai') {
    return fetchOpenAIResponse(apiKey, model, conversation as OpenAIMessage[], onChunk);
  } else if (apiType === 'claude') {
    return fetchClaudeResponse(apiKey, model, systemPrompt as string, conversation as ClaudeMessage[], onChunk);
  }
}

// Helper function to create the appropriate conversation format for each API
function createConversation(
  prompt: string, 
  apiType: string, 
  systemPrompt: OllamaSystemPrompt | OpenAISystemPrompt | string
): OllamaMessage[] | OpenAIMessage[] | ClaudeMessage[] {
  if (apiType === 'ollama') {
    return [
      systemPrompt as OllamaSystemPrompt,
      { role: 'user', content: prompt }
    ];
  } else if (apiType === 'openai') {
    return [
      systemPrompt as OpenAISystemPrompt,
      { role: 'user', content: [{ type: 'text', text: prompt }] }
    ];
  } else if (apiType === 'claude') {
    return [{ role: 'user', content: prompt }];
  }
  
  return [];
}

// Fetch response from Ollama API
async function fetchOllamaResponse(
  ollamaUrl: string, 
  model: string, 
  conversation: OllamaMessage[], 
  onChunk: (chunk: string) => void
) {
  const response = await fetch(`${ollamaUrl}/api/chat`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model, messages: conversation, stream: true }),
  });

  if (!response.body) throw new Error("No stream response");
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true }); // stream: true avoids splitting multi-byte characters across chunks
    let boundary = buffer.indexOf('\n');
    while (boundary !== -1) {
      const chunk = buffer.slice(0, boundary).trim();
      buffer = buffer.slice(boundary + 1);
      try {
        const json = JSON.parse(chunk);
        if (json.message?.content) onChunk(json.message.content);
      } catch { /* ignore partial JSON; the rest arrives in a later chunk */ }
      boundary = buffer.indexOf('\n');
    }
  }
}

// Fetch response from OpenAI API
async function fetchOpenAIResponse(
  apiKey: string, 
  model: string, 
  conversation: OpenAIMessage[], 
  onChunk: (chunk: string) => void
) {
  const response = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model,
      messages: conversation,
      stream: true,
    }),
  });

  if (!response.body) throw new Error("No stream response");
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true }); // stream: true avoids splitting multi-byte characters across chunks
    let boundary = buffer.indexOf('\n');
    while (boundary !== -1) {
      const line = buffer.slice(0, boundary).trim();
      buffer = buffer.slice(boundary + 1);
      if (line.startsWith('data: ') && !line.includes('[DONE]')) {
        try {
          const data = JSON.parse(line.slice(6));
          const delta = data.choices[0].delta?.content || '';
          onChunk(delta);
        } catch (e) {
          console.error('Error parsing OpenAI response:', e);
        }
      }
      boundary = buffer.indexOf('\n');
    }
  }
}

// Fetch response from Claude API
async function fetchClaudeResponse(
  apiKey: string, 
  model: string, 
  systemPrompt: string, 
  conversation: ClaudeMessage[], 
  onChunk: (chunk: string) => void
) {
  const response = await fetch('https://api.anthropic.com/v1/messages', {
    method: 'POST',
    headers: {
      'x-api-key': apiKey,
      'content-type': 'application/json',
      'anthropic-version': '2023-06-01',
      'anthropic-dangerous-direct-browser-access': 'true',
    },
    body: JSON.stringify({
      model,
      system: systemPrompt,
      messages: conversation,
      stream: true,
      max_tokens: 1024,
    }),
  });

  if (!response.body) throw new Error("No stream response");
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true }); // stream: true avoids splitting multi-byte characters across chunks
    let boundary = buffer.indexOf('\n');
    while (boundary !== -1) {
      const line = buffer.slice(0, boundary).trim();
      buffer = buffer.slice(boundary + 1);
      if (line.startsWith('data: ')) {
        try {
          const data = JSON.parse(line.slice(6));
          if (data.type === 'content_block_delta') {
            const delta = data.delta?.text || '';
            onChunk(delta);
          }
        } catch (e) {
          console.error('Error parsing Claude response:', e);
        }
      }
      boundary = buffer.indexOf('\n');
    }
  }
}
TSX
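
The three providers stream their responses in two different wire formats: Ollama emits newline-delimited JSON (one complete JSON object per line), while OpenAI and Anthropic emit Server-Sent Events (`data: {json}` lines). The parsers above handle these formats but never show what the lines look like, so here is a minimal sketch of the per-line extraction logic. The sample payloads are illustrative, not captured from live responses:

```typescript
// Ollama streams newline-delimited JSON: one complete JSON object per line.
function parseOllamaLine(line: string): string {
  try {
    return JSON.parse(line).message?.content ?? '';
  } catch {
    return ''; // partial line; wait for more data
  }
}

// OpenAI and Anthropic stream Server-Sent Events: "data: {json}" lines.
// OpenAI ends the stream with "data: [DONE]"; Anthropic tags each event with a type.
function parseSseLine(line: string): string {
  if (!line.startsWith('data: ') || line.includes('[DONE]')) return '';
  try {
    const data = JSON.parse(line.slice(6));
    // Anthropic shape: delta.text on content_block_delta events.
    if (data.type === 'content_block_delta') return data.delta?.text ?? '';
    // OpenAI shape: choices[0].delta.content.
    return data.choices?.[0]?.delta?.content ?? '';
  } catch {
    return '';
  }
}

// Illustrative sample lines:
const ollamaSample = '{"message":{"role":"assistant","content":"Hello"},"done":false}';
const openaiSample = 'data: {"choices":[{"delta":{"content":"Hi"}}]}';
const claudeSample = 'data: {"type":"content_block_delta","delta":{"type":"text_delta","text":"Lo"}}';
console.log(parseOllamaLine(ollamaSample)); // "Hello"
console.log(parseSseLine(openaiSample));    // "Hi"
console.log(parseSseLine(claudeSample));    // "Lo"
```

Buffering on `\n` boundaries, as the functions above do, is what makes this safe: a network chunk can end mid-JSON, so only complete lines are parsed.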

prompts.ts

// react-router > app > oracle > prompts.ts
// System prompts for each API service
const systemPrompts = {
  ollama: {
    role: "system",
    content: "You are the Oracle of Tartarus Insight, a mystical and all-knowing guide for entrepreneurs lost in the abyss of business struggles. Your mission is to provide practical, actionable advice to help them escape their challenges, but you must do so in a playful, lighthearted, and mystical tone that aligns with your enigmatic persona.\n\nSpeak as if you are a wise, ancient Oracle. Use playful, mystical language, but ensure your responses are approachable and clear to everyone. Gently tease the user about their predicament, but always remain encouraging and respectful. Your advice must be grounded and useful, covering topics like strategy, growth, marketing, and other entrepreneurial challenges. Help the user see a clear path forward.\n\nUse humor that is lighthearted and self-aware. Feel free to poke fun at the user's situation, but always ensure it feels supportive rather than dismissive. Despite your mystical tone, ensure your answers are straightforward and actionable. Avoid being vague or overly abstract.\n\nFor example:\nUser: \"How do I get more customers for my online store?\"\nOracle: \"Ah, a common plight for a merchant stranded in the abyss of obscurity. Fear not! The Oracle sees all. Begin by summoning the power of social media ads—Facebook and Instagram shall be your allies. Offer discounts to entice the wary. And remember: clear, compelling product photos are worth their weight in gold. Go now, and may your customer count multiply like stars in the night sky!\""
  },
  openai: {
    role: "system",
    content: [
      {
        type: "text",
        text: "You are the Oracle of Tartarus Insight, a mystical and all-knowing guide for entrepreneurs lost in the abyss of business struggles. Your mission is to provide practical, actionable advice to help them escape their challenges, but you must do so in a playful, lighthearted, and mystical tone that aligns with your enigmatic persona.\n\nSpeak as if you are a wise, ancient Oracle. Use playful, mystical language, but ensure your responses are approachable and clear to everyone. Gently tease the user about their predicament, but always remain encouraging and respectful. Your advice must be grounded and useful, covering topics like strategy, growth, marketing, and other entrepreneurial challenges. Help the user see a clear path forward.\n\nUse humor that is lighthearted and self-aware. Feel free to poke fun at the user's situation, but always ensure it feels supportive rather than dismissive. Despite your mystical tone, ensure your answers are straightforward and actionable. Avoid being vague or overly abstract.\n\nFor example:\nUser: \"How do I get more customers for my online store?\"\nOracle: \"Ah, a common plight for a merchant stranded in the abyss of obscurity. Fear not! The Oracle sees all. Begin by summoning the power of social media ads—Facebook and Instagram shall be your allies. Offer discounts to entice the wary. And remember: clear, compelling product photos are worth their weight in gold. Go now, and may your customer count multiply like stars in the night sky!\""
      }
    ]
  },
  claude: "You are the Oracle of Tartarus Insight, a mystical and all-knowing guide for entrepreneurs lost in the abyss of business struggles. Your mission is to provide practical, actionable advice to help them escape their challenges, but you must do so in a playful, lighthearted, and mystical tone that aligns with your enigmatic persona.\n\nSpeak as if you are a wise, ancient Oracle. Use playful, mystical language, but ensure your responses are approachable and clear to everyone. Gently tease the user about their predicament, but always remain encouraging and respectful. Your advice must be grounded and useful, covering topics like strategy, growth, marketing, and other entrepreneurial challenges. Help the user see a clear path forward.\n\nUse humor that is lighthearted and self-aware. Feel free to poke fun at the user's situation, but always ensure it feels supportive rather than dismissive. Despite your mystical tone, ensure your answers are straightforward and actionable. Avoid being vague or overly abstract.\n\nFor example:\nUser: \"How do I get more customers for my online store?\"\nOracle: \"Ah, a common plight for a merchant stranded in the abyss of obscurity. Fear not! The Oracle sees all. Begin by summoning the power of social media ads—Facebook and Instagram shall be your allies. Offer discounts to entice the wary. And remember: clear, compelling product photos are worth their weight in gold. Go now, and may your customer count multiply like stars in the night sky!\""
};

export function getSystemPrompt(apiType: 'ollama' | 'openai' | 'claude') {
  return systemPrompts[apiType];
}
TSX

message-list.tsx

// react-router > app > oracle > message-list.tsx
import { type RefObject } from 'react';
import markdownit from 'markdown-it';

const md = new markdownit();

interface Message {
    role: 'user' | 'assistant';
    content: string;
}

interface MessageListProps {
    messages: Message[];
    messageContainerRef: RefObject<HTMLDivElement | null>;
}

export default function MessageList({ messages, messageContainerRef }: MessageListProps) {
    return (
        <div 
            ref={messageContainerRef} 
            className="flex-1 px-4 py-2 overflow-y-auto text-left space-y-3">
            {messages.map((msg, idx) => (
                <div
                key={idx}
                className={`py-1 px-2 rounded w-fit text-white ${
                    msg.role === 'user' ? 'bg-zinc-700' : ''
                }`}
                >
                <div className="[&>p]:!m-0" dangerouslySetInnerHTML={{ __html: md.render(msg.content) }} />
                </div>
            ))}
        </div>
    );
}
TSX

chat-input.tsx

// react-router > app > oracle > chat-input.tsx
interface ChatInputProps {
    message: string;
    setMessage: (message: string) => void;
    handleSend: () => void;
    resizeTextarea: (textarea: HTMLTextAreaElement) => void;
    disabled: boolean;
}

export default function ChatInput({
    message, 
    setMessage, 
    handleSend, 
    resizeTextarea,
    disabled
  }: ChatInputProps) {
    return (
        <div className="flex items-center gap-2 px-4 py-2 bg-gray-700/50">
            <textarea
                rows={1}
                value={message}
                disabled={disabled}
                onChange={(e) => {
                setMessage(e.target.value);
                resizeTextarea(e.target);
                }}
                onKeyDown={(e) => {
                if (e.key === 'Enter' && !e.shiftKey) {
                    e.preventDefault();
                    handleSend();
                }
                }}
                placeholder="Type your message..."
                className="scrollbar flex-1 bg-transparent text-white text-base px-2 py-2 resize-none focus:outline-none max-h-[12rem]"
            />
            <button
                onClick={handleSend}
                disabled={disabled || !message.trim()}
                className="text-white border border-white px-3 py-1 rounded-full hover:bg-gray-600 disabled:opacity-50"
            >
                Send
            </button>
        </div>
    );
}
TSX

settings-bar.tsx

// react-router > app > oracle > settings-bar.tsx
import { FaCog, FaLifeRing } from "react-icons/fa";
import { Link } from "react-router";
import ApiModelSelector from "./api-model-selector";

interface SettingsBarProps {
    selectedApi: string;
    setSelectedApi: (api: string) => void;
    selectedModel: string;
    setSelectedModel: (model: string) => void;
    onSettingsClick: () => void;
}

export default function SettingsBar({
    selectedApi,
    setSelectedApi,
    selectedModel,
    setSelectedModel,
    onSettingsClick
}: SettingsBarProps) {
    return (
        <div className="flex justify-between items-center px-4 py-2 border-b border-gray-700 text-sm">
            <ApiModelSelector
                selectedApi={selectedApi}
                setSelectedApi={setSelectedApi}
                selectedModel={selectedModel}
                setSelectedModel={setSelectedModel}
            />
            <div className="flex items-center gap-3">
                <button
                onClick={onSettingsClick}
                className="flex items-center gap-1 bg-transparent text-white border border-white px-3 py-2 rounded-full hover:bg-gray-600">
                Settings <FaCog />
                </button>
                <Link to="help" className="flex items-center gap-1 text-white text-sm hover:underline">
                Help <FaLifeRing />
                </Link>
            </div>
        </div>
    );
}
TSX

api-model-selector.tsx

// react-router > app > oracle > api-model-selector.tsx
import { useEffect, useState } from 'react';

interface OllamaModel {
    name: string;
}

interface OllamaResponse {
    models: OllamaModel[];
}

interface Props {
    selectedApi: string;
    setSelectedApi: (api: string) => void;
    selectedModel: string;
    setSelectedModel: (model: string) => void;
}

export default function ApiModelSelector({
    selectedApi,
    setSelectedApi,
    selectedModel,
    setSelectedModel,
  }: Props) {
    const [models, setModels] = useState<string[]>([]);
    const [error, setError] = useState('');
    const [showModelSelect, setShowModelSelect] = useState(false);

    useEffect(() => {
        async function fetchModels() {
            if (selectedApi === 'ollama') {
                try {
                    const res = await fetch('http://localhost:11434/api/tags');
                    if (!res.ok) throw new Error(`HTTP error! Status: ${res.status}`);
                    const data: OllamaResponse = await res.json();
                    const modelNames = data.models.map((m) => m.name);
    
                    if (modelNames.length === 0) {
                        throw new Error('No models available. Download one from https://ollama.com/search?q=llama');
                    }
    
                    setModels(modelNames);
                    setSelectedModel(modelNames[0]);
                    setShowModelSelect(true);
                } catch (err) {
                    if (err instanceof Error) {
                        setError(`Error: ${err.message}. Make sure Ollama is running...`);
                    } else {
                        setError('An unknown error occurred.');
                    }
                    setShowModelSelect(false);
                }
            } else if (selectedApi === 'openai') {
                const openaiModels = ['gpt-4o-mini', 'gpt-4o'];
                setModels(openaiModels);
                setSelectedModel(openaiModels[0]);
                setShowModelSelect(true);
            } else if (selectedApi === 'claude') {
                const claudeModels = ['claude-3-5-haiku-20241022', 'claude-3-5-sonnet-20241022'];
                setModels(claudeModels);
                setSelectedModel(claudeModels[0]);
                setShowModelSelect(true);
            }
        }

        if (selectedApi) {
            fetchModels();
        }
    }, [selectedApi, setSelectedModel]);

    const handleBack = () => {
        setSelectedApi('');
        setSelectedModel('');
        setShowModelSelect(false);
        setError('');
    };

    return (
        <div className="flex items-center gap-2">
            {!showModelSelect ? (
                <select
                    value={selectedApi}
                    onChange={(e) => setSelectedApi(e.target.value)}
                    className="dropdown block px-2 py-1 text-white border border-white bg-transparent rounded"
                >
                    <option value="">Choose an API</option>
                    <option value="ollama">Ollama</option>
                    <option value="openai">OpenAI GPT</option>
                    <option value="claude">Anthropic Claude</option>
                </select>
            ) : (
                <div className="models-wrapper flex items-center gap-2">
                    <span className="cursor-pointer" onClick={handleBack}>←</span>
                    <select
                        value={selectedModel}
                        onChange={(e) => setSelectedModel(e.target.value)}
                        className="dropdown bg-transparent text-white rounded px-2 py-1"
                    >
                        {models.map((model) => (
                        <option key={model} value={model}>{model}</option>
                        ))}
                    </select>
                </div>
            )}

            {error && <div className="text-red-500 text-sm mt-1">{error}</div>}
        </div>
    );
}
TSX
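As a point of reference, the Ollama branch of the effect above populates its list from Ollama's `GET /api/tags` endpoint, which responds with a `models` array of objects carrying a `name` field. Here is a minimal, self-contained sketch of that parsing step (the helper and interface names are mine, not from the project):

```typescript
// Sketch: turning Ollama's GET /api/tags response into the flat list of
// model names the SettingsBar dropdown expects. The response shape below
// matches Ollama's documented API; only the field we need is typed.
interface OllamaTagsResponse {
  models: { name: string }[];
}

function extractModelNames(data: OllamaTagsResponse): string[] {
  return data.models.map((m) => m.name);
}

// Example with a mocked response:
const mock: OllamaTagsResponse = {
  models: [{ name: 'llama3.2:latest' }, { name: 'mistral:latest' }],
};
console.log(extractModelNames(mock)); // logs the two model names
```

In the component, the result of this parsing is what ends up in `setModels`, with the first entry passed to `setSelectedModel` as the default.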

settings-modal.tsx

// react-router > app > oracle > settings-modal.tsx
import { useState, useEffect } from "react";

interface Props {
    isOpen: boolean;
    onClose: () => void;
    onSave?: (settings: {
        ollamaUrl: string;
        openaiKey: string;
        claudeKey: string;
    }) => void;
    initialSettings?: {
        ollamaUrl: string;
        openaiKey: string;
        claudeKey: string;
    };
}

export default function SettingsModal({ isOpen, onClose, onSave, initialSettings }: Props) {
    const [selectedApi, setSelectedApi] = useState<'ollama' | 'openai' | 'anthropic'>('ollama');
    const [ollamaUrl, setOllamaUrl] = useState(initialSettings?.ollamaUrl || 'http://localhost:11434');
    const [openaiKey, setOpenaiKey] = useState(initialSettings?.openaiKey || '');
    const [claudeKey, setClaudeKey] = useState(initialSettings?.claudeKey || '');
    
    // Update local state when initialSettings change
    useEffect(() => {
        if (initialSettings) {
            setOllamaUrl(initialSettings.ollamaUrl);
            setOpenaiKey(initialSettings.openaiKey);
            setClaudeKey(initialSettings.claudeKey);
        }
    }, [initialSettings]);

    useEffect(() => {
        const handleKeyDown = (e: KeyboardEvent) => {
            if (e.key === 'Escape') onClose();
        };
        document.addEventListener('keydown', handleKeyDown);
        return () => document.removeEventListener('keydown', handleKeyDown);
    }, [onClose]);

    if (!isOpen) return null;
    
    const handleSave = () => {
        // If onSave is provided, call it with the current settings
        if (onSave) {
            onSave({
                ollamaUrl,
                openaiKey,
                claudeKey,
            });
        }
        onClose();
    };
    return (
        <div className="fixed inset-0 bg-black/50 flex justify-center items-center z-50">
            <div className="bg-[#262626] rounded-md w-full max-w-2xl mx-4 overflow-hidden">
                {/* Header */}
                <div className="flex justify-between items-center border-b border-gray-700 p-4 text-white">
                    <h2 className="text-lg font-semibold m-0">Settings</h2>
                    <button 
                        onClick={onClose} 
                        className="text-white text-xl cursor-pointer">
                            ×
                    </button>
                </div>

                {/* Body */}
                <div className="p-4 text-white space-y-4">
                {/* API Selector */}
                <div>
                    <label htmlFor="api-select" className="block mb-1">Choose API:</label>
                    <select
                    id="api-select"
                    value={selectedApi}
                    onChange={(e) => setSelectedApi(e.target.value as typeof selectedApi)}
                    className="w-full bg-[#171717] border border-white text-white px-2 py-1 rounded"
                    >
                    <option value="ollama">Ollama</option>
                    <option value="openai">OpenAI GPT</option>
                    <option value="anthropic">Anthropic Claude</option>
                    </select>
                </div>

                {/* API-specific Inputs */}
                {selectedApi === 'ollama' && (
                    <div>
                    <label htmlFor="ollama-url" className="block mb-1">Ollama API Connection:</label>
                    <input
                        id="ollama-url"
                        type="text"
                        value={ollamaUrl}
                        onChange={(e) => setOllamaUrl(e.target.value)}
                        className="w-full bg-[#171717] border border-white text-white px-2 py-1 rounded"
                    />
                    </div>
                )}

                {selectedApi === 'openai' && (
                    <div>
                    <label htmlFor="openai-api-key" className="block mb-1">OpenAI API Key:</label>
                    <input
                        id="openai-api-key"
                        type="text"
                        placeholder="Enter your OpenAI API key"
                        value={openaiKey}
                        onChange={(e) => setOpenaiKey(e.target.value)}
                        className="w-full bg-[#171717] border border-white text-white px-2 py-1 rounded"
                    />
                    </div>
                )}

                {selectedApi === 'anthropic' && (
                    <div>
                    <label htmlFor="claude-api-key" className="block mb-1">Claude API Key:</label>
                    <input
                        id="claude-api-key"
                        type="text"
                        placeholder="Enter your Claude API key"
                        value={claudeKey}
                        onChange={(e) => setClaudeKey(e.target.value)}
                        className="w-full bg-[#171717] border border-white text-white px-2 py-1 rounded"
                    />
                    </div>
                )}
                </div>

                {/* Footer */}
                <div className="flex justify-end gap-2 p-4 border-t border-gray-700">
                <button
                    onClick={onClose}
                    className="text-white border border-white px-3 py-1 rounded-full hover:bg-gray-600"
                >
                    Cancel
                </button>
                <button
                    onClick={handleSave}
                    className="text-white border border-white px-3 py-1 rounded-full hover:bg-gray-600"
                >
                    Save
                </button>
                </div>
            </div>
        </div>
    );
}
TSX
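The modal only lifts the settings up through `onSave`; what the parent does with them is its own concern. One obvious option is persisting them so they survive a reload. The sketch below is not part of the project — the key name and helpers are hypothetical, and the storage is injected so the helpers can be exercised outside the browser (in the app you would pass `window.localStorage`):

```typescript
// Hypothetical persistence helpers for the settings the modal emits.
// Storage is injected (StringStore) instead of referencing localStorage
// directly, which keeps the helpers testable outside a browser.
interface OracleSettings {
  ollamaUrl: string;
  openaiKey: string;
  claudeKey: string;
}

interface StringStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const SETTINGS_KEY = 'oracle-settings'; // hypothetical key name

function saveSettings(store: StringStore, settings: OracleSettings): void {
  store.setItem(SETTINGS_KEY, JSON.stringify(settings));
}

function loadSettings(store: StringStore): OracleSettings | null {
  const raw = store.getItem(SETTINGS_KEY);
  return raw ? (JSON.parse(raw) as OracleSettings) : null;
}
```

Note that persisting raw API keys in `localStorage` is a convenience trade-off: anything with script access to the page can read them, so for anything beyond a local tool you would keep keys server-side.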

app.css

/* react-router > app > app.css */
@import "tailwindcss";

/* Sets custom breakpoints so we can use them later with Tailwind */
@theme {
  --breakpoint-xs: 280px;
  --breakpoint-sm: 450px;
  --breakpoint-md: 675px;
  --breakpoint-lg: 768px;
  --breakpoint-xl: 1024px;
  --breakpoint-2xl: 2048px;

  --shadow-example: 0px 0px 11px rgb(255 255 255 / 46%);
}

body {
  font-family: "Poppins", sans-serif;
  font-weight: 400;
  font-style: normal;
  line-height: 1.6;
  color: #ddd;
  background-color: #0e0e0e;
}

h1, h2, h3, h4, h5, h6 {
  font-family: "Caesar Dressing", system-ui;
  font-weight: 400;
  font-style: normal;
}

h1 {
  font-size: 32px;
}

h2 {
  font-size: 20px;
}

p {
  font-size: 16px;
  margin-bottom: 8px;
}

.text-container {
  max-width: 810px;
}

.banner-shadow {
  text-shadow: 2px 2px 15px black;
}

.dropdown option {
  background-color: #343434;
  color: #ffffff;
}

/* Nav menu styles: apply from the 450px mobile breakpoint up */
/* Apply a decorative border image with plain CSS because Tailwind doesn’t support border-image out of the box. */
@media (min-width: 450px) { 
  header {
      border-image: url('/images/meandros-pattern.webp') 30 round;
  }
}

@media (min-width: 768px) {
  h1 {
      font-size: 40px;
  }

  h2 {
      font-size: 24px;
  }
  
  p {
      font-size: 18px;
  }
}

.scrollbar {
  /* Firefox (and Chromium 121+): standard scrollbar properties */
  scrollbar-width: thin;
  scrollbar-color: #404040 transparent;

  /* Legacy IE/Edge */
  -ms-overflow-style: -ms-autohiding-scrollbar;
}

/* WebKit-based browsers like Safari and Chrome */
.scrollbar::-webkit-scrollbar {
  width: 4px;
  height: 10px;
}

.scrollbar::-webkit-scrollbar-track {
  background-color: transparent;
}

.scrollbar::-webkit-scrollbar-thumb {
  background-color: #404040;
  border-radius: 5px;
}

.scrollbar::-webkit-scrollbar-thumb:hover {
  background-color: #555;
}
CSS
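To illustrate how the `@theme` tokens above surface in markup: Tailwind v4 turns `--breakpoint-xs` into an `xs:` variant and `--shadow-example` into a `shadow-example` utility. A hypothetical fragment (not from the project) using both:

```tsx
{/* Hypothetical markup: the xs: variant and shadow-example utility
    come from the @theme block in app.css above. */}
<div className="shadow-example rounded p-2 xs:p-4 md:p-6">
  Custom breakpoints and shadows in action
</div>
```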

Conclusion

When I first set out to compare building this project with Next.js and React Router, I expected to encounter major differences between the two. But to my surprise, aside from routing and file structure, the overall development experience was very similar. Most of the components, hooks, and styling worked the same way in both setups.

Interestingly, the biggest challenge wasn’t switching between frameworks. It was actually converting the original static HTML/CSS/JavaScript version into a modern React-based architecture. Once that transition was done, moving between Next.js and React Router felt almost seamless.

What’s Next?

With both versions of the app, Next.js and React Router, fully functional, the next step is to take it live. I plan to host the project on a VPS provider such as Linode or DigitalOcean.

You can check out the Git repo here.
