Building a ChatGPT-Powered Support Widget in React

AI-powered chat assistants are no longer a luxury feature reserved for big tech companies. With the OpenAI API, you can embed a fully functional support widget into any React app in an afternoon — one that understands context, remembers the conversation, and responds in natural language.
In this guide we'll build a ChatGPT-powered support widget from scratch. We'll cover the theory behind how the OpenAI Chat API works, why calling it directly from React is a serious security mistake, how conversation history is maintained, and how to build both an inline chat component and a floating widget. We'll also show you the quick insecure version for local development and the proper secure version using a Node.js backend proxy.
How the OpenAI Chat API Works
The OpenAI Chat API is built around a single concept: a messages array. Every request you send to the API includes the full conversation history as an array of message objects, and the API responds with the next message in that conversation. There is no session on OpenAI's side — the API is completely stateless. You are responsible for storing the conversation history and sending it with every request.
Each message in the array has two properties: role and content. The role can be one of three values. "system" is a special message that sets the behaviour and personality of the assistant — it's typically the first message in the array and the user never sees it. "user" is a message from the human. "assistant" is a message from the AI. A typical conversation array looks like this:
[
{
role: "system",
content: "You are a helpful support assistant for ProjectSchool, a React learning platform. Answer questions about React, JavaScript, and web development clearly and concisely."
},
{
role: "user",
content: "What is the difference between props and state?"
},
{
role: "assistant",
content: "Props are read-only values passed into a component from its parent. State is data that a component manages internally and can update over time..."
},
{
role: "user",
content: "Can you give me a code example?"
}
]
The system message is your most powerful tool. It's where you give the assistant its identity, its rules, and its knowledge about your specific product. A well-written system prompt is the difference between a generic chatbot and a support assistant that actually knows your platform.
Tokens are the unit of measurement the API uses for both input and output. Roughly speaking, one token is about four characters or three quarters of a word. Every message in your array consumes tokens — including all the previous conversation history. This is why very long conversations become expensive and why you may want to trim older messages from the history after a certain point.
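One way to keep the history bounded can be sketched as a small helper that keeps only the most recent messages (the cutoff of 10 is an arbitrary example). The system prompt is prepended separately when you build the request, so it is never trimmed away:

```javascript
// Sketch: cap conversation history before sending it to the API.
// maxMessages = 10 is an illustrative cutoff, not a recommendation.
function trimHistory(messages, maxMessages = 10) {
  if (messages.length <= maxMessages) return messages;
  let trimmed = messages.slice(-maxMessages);
  // Avoid starting mid-pair: drop a leading assistant reply whose
  // user question was trimmed away.
  while (trimmed.length && trimmed[0].role === "assistant") {
    trimmed = trimmed.slice(1);
  }
  return trimmed;
}
```

In the request you'd then send `[SYSTEM_PROMPT, ...trimHistory(updatedMessages)]` instead of the full array.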
Why You Must Never Call the OpenAI API Directly from React
This is the most critical security point in this entire guide. Your OpenAI API key is a secret credential that grants access to your OpenAI account and is billed to your credit card. If someone gets hold of it, they can make unlimited API calls at your expense.
When you call the OpenAI API from your React frontend — even using a VITE_ environment variable — your API key is bundled into the JavaScript that gets sent to every user's browser. Anyone can open the browser's developer tools, go to the Network tab, inspect the request headers, and read your API key in plain text. It takes about 30 seconds. Environment variables in Vite are not secret — they are baked into the public JavaScript bundle at build time.
The correct approach is to use your backend as a proxy. Your React frontend never touches the OpenAI API directly. Instead, it sends the user's message to your own Express endpoint. Your Express server — which runs on a server the user cannot inspect — holds the API key as a real secret environment variable, calls OpenAI on behalf of the frontend, and returns the response. The key never leaves your server.
How Conversation History is Maintained
Since the OpenAI API is stateless, maintaining conversation history is entirely your responsibility. In a React support widget, the conversation lives in a useState array. Every time the user sends a message, you append it to the array. When you get a response back from the API, you append that too. The full array is sent with every subsequent request.
This has an important implication for your UI: the messages array is the single source of truth for both what you display in the chat window and what you send to the API. You don't need a separate display array and API array — they're the same thing. Render the messages array in the UI, and send the messages array (minus any UI-only metadata) to the API.
For a production widget you'd also want to consider persistence. If the user refreshes the page, the conversation is lost because useState doesn't survive a page reload. You can persist the messages array to localStorage to restore the conversation on reload, or store it in a database if you want conversation history across devices.
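As a sketch, localStorage persistence can be two small helpers (the `"support-chat-history"` key name is an arbitrary choice; the `storage` parameter defaults to the browser's localStorage but is injectable for testing):

```javascript
// Sketch: save and restore the chat history across page reloads.
const STORAGE_KEY = "support-chat-history"; // arbitrary key name

function saveMessages(messages, storage = globalThis.localStorage) {
  storage.setItem(STORAGE_KEY, JSON.stringify(messages));
}

function loadMessages(storage = globalThis.localStorage) {
  try {
    const raw = storage.getItem(STORAGE_KEY);
    return raw ? JSON.parse(raw) : [];
  } catch {
    return []; // Corrupt or missing data: start a fresh conversation
  }
}
```

In the component you could then initialise state with `useState(loadMessages)` and save from an effect such as `useEffect(() => saveMessages(messages), [messages])`.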
Streaming vs Waiting for the Full Response
By default, the OpenAI API waits until the entire response is generated before sending it back. For short responses this is fine, but for longer answers the user might wait 5-10 seconds staring at a loading spinner before anything appears. This feels slow and unresponsive.
Streaming solves this by sending the response token by token as it's generated — the same way ChatGPT's own interface works, where you see words appearing in real time. The API sends the response as a stream of Server-Sent Events (SSE), and your client reads them incrementally and appends each chunk to the displayed message as it arrives.
Implementing streaming adds meaningful complexity — you need to handle SSE on the backend, read a ReadableStream on the frontend, and update a partially-built message in state as chunks arrive. For this guide we'll build the simpler non-streaming version, which gets you a fully working widget; streaming is a natural enhancement to add once the basics are in place.
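To give a feel for the frontend half, here's a minimal parser for one chunk of streamed text. It assumes OpenAI-style SSE events (`data: {json}` lines with the new text under `choices[0].delta.content`, and a final `data: [DONE]` sentinel); you'd call it on each decoded chunk read from the response's ReadableStream and append the returned text to the in-progress assistant message:

```javascript
// Sketch: extract the content deltas from one chunk of an
// OpenAI-style SSE stream. Each event arrives as a "data: {...}"
// line; the stream ends with the literal "data: [DONE]".
function parseSseChunk(chunkText) {
  let text = "";
  for (const line of chunkText.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue;
    const payload = trimmed.slice(5).trim();
    if (payload === "[DONE]") break;
    const delta = JSON.parse(payload).choices?.[0]?.delta?.content;
    if (delta) text += delta;
  }
  return text;
}
```

A real implementation also has to handle an event split across two network chunks, which is the main extra complexity this sketch leaves out.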
Part 1: Quick Version — Direct API Call (Development Only)
This version calls OpenAI directly from React. It is only safe for local development where your .env file never leaves your machine. Never deploy this to production. We're showing it first because it lets you get a working chat widget running in minutes without setting up a backend.
Install the OpenAI SDK in your React project:
npm install openai
Add your API key to your local .env file:
# .env (never commit this)
VITE_OPENAI_API_KEY=sk-your-api-key-here
Now build the chat component. Create src/components/SupportChat.jsx:
// src/components/SupportChat.jsx (DEV ONLY — insecure)
import { useState, useRef, useEffect } from "react";
import OpenAI from "openai";
const client = new OpenAI({
apiKey: import.meta.env.VITE_OPENAI_API_KEY,
dangerouslyAllowBrowser: true, // Required for direct browser calls — dev only
});
const SYSTEM_PROMPT = {
role: "system",
content:
"You are a helpful support assistant. Answer questions clearly and concisely. If you don't know something, say so honestly.",
};
function SupportChat() {
const [messages, setMessages] = useState([]); // Conversation history
const [input, setInput] = useState("");
const [isLoading, setIsLoading] = useState(false);
const bottomRef = useRef(null);
// Auto-scroll to the latest message
useEffect(() => {
bottomRef.current?.scrollIntoView({ behavior: "smooth" });
}, [messages]);
const sendMessage = async () => {
if (!input.trim() || isLoading) return;
const userMessage = { role: "user", content: input.trim() };
const updatedMessages = [...messages, userMessage];
setMessages(updatedMessages);
setInput("");
setIsLoading(true);
try {
const response = await client.chat.completions.create({
model: "gpt-4o-mini", // Fast and cheap — ideal for support widgets
messages: [SYSTEM_PROMPT, ...updatedMessages],
max_tokens: 500,
});
const assistantMessage = {
role: "assistant",
content: response.choices[0].message.content,
};
setMessages((prev) => [...prev, assistantMessage]);
} catch (error) {
setMessages((prev) => [
...prev,
{ role: "assistant", content: "Sorry, something went wrong. Please try again." },
]);
} finally {
setIsLoading(false);
}
};
const handleKeyDown = (e) => {
if (e.key === "Enter" && !e.shiftKey) {
e.preventDefault();
sendMessage();
}
};
return (
<div className="support-chat">
<div className="chat-header">
<span className="chat-status-dot" />
<h3>Support Assistant</h3>
</div>
<div className="chat-messages">
{messages.length === 0 && (
<p className="chat-empty">Hi! How can I help you today?</p>
)}
{messages.map((msg, i) => (
<div key={i} className={`chat-bubble ${msg.role}`}>
{msg.content}
</div>
))}
{isLoading && (
<div className="chat-bubble assistant loading">
<span />
<span />
<span />
</div>
)}
<div ref={bottomRef} />
</div>
<div className="chat-input-row">
<textarea
value={input}
onChange={(e) => setInput(e.target.value)}
onKeyDown={handleKeyDown}
placeholder="Type a message... (Enter to send)"
rows={1}
disabled={isLoading}
/>
<button onClick={sendMessage} disabled={isLoading || !input.trim()}>
Send
</button>
</div>
</div>
);
}
export default SupportChat;
Part 2: Secure Version — Node.js Backend Proxy
This is the correct production implementation. The API key lives on the server. React never sees it. The architecture is simple: React sends the conversation history to your Express endpoint, Express calls OpenAI with the API key, and returns the assistant's reply.
Step 1: Set Up the Express Endpoint
In your server folder, install the OpenAI SDK and add your API key to your server .env:
cd server
npm install openai
# server/.env
OPENAI_API_KEY=sk-your-api-key-here
Create a route file for the chat endpoint — server/routes/chat.js:
// server/routes/chat.js
const express = require("express");
const OpenAI = require("openai");
const router = express.Router();
const client = new OpenAI({
apiKey: process.env.OPENAI_API_KEY, // Safe — this never leaves the server
});
const SYSTEM_PROMPT = {
role: "system",
content:
"You are a helpful support assistant. Answer questions clearly and concisely. If you don't know something, say so honestly.",
};
// POST /api/chat
// Body: { messages: [{ role, content }] }
router.post("/", async (req, res) => {
const { messages } = req.body;
if (!messages || !Array.isArray(messages)) {
return res.status(400).json({ error: "messages array is required" });
}
try {
const response = await client.chat.completions.create({
model: "gpt-4o-mini",
messages: [SYSTEM_PROMPT, ...messages],
max_tokens: 500,
});
const reply = response.choices[0].message.content;
res.status(200).json({ reply });
} catch (error) {
console.error("OpenAI error:", error.message);
res.status(500).json({ error: "Failed to get response from AI" });
}
});
module.exports = router;
Register the route in your server.js:
// server/server.js
app.use(express.json()); // Needed (if not already present) so the route can read req.body
app.use("/api/chat", require("./routes/chat"));
Step 2: Update the React Component to Use Your Backend
Now update SupportChat.jsx to call your own backend instead of OpenAI directly. Notice how clean this version is — no OpenAI SDK import, no dangerouslyAllowBrowser, no API key anywhere near the frontend:
// src/components/SupportChat.jsx (Production — secure)
import { useState, useRef, useEffect } from "react";
const API_URL = import.meta.env.VITE_API_URL;
function SupportChat() {
const [messages, setMessages] = useState([]);
const [input, setInput] = useState("");
const [isLoading, setIsLoading] = useState(false);
const bottomRef = useRef(null);
useEffect(() => {
bottomRef.current?.scrollIntoView({ behavior: "smooth" });
}, [messages]);
const sendMessage = async () => {
if (!input.trim() || isLoading) return;
const userMessage = { role: "user", content: input.trim() };
const updatedMessages = [...messages, userMessage];
setMessages(updatedMessages);
setInput("");
setIsLoading(true);
try {
const res = await fetch(`${API_URL}/chat`, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ messages: updatedMessages }),
});
if (!res.ok) throw new Error("API request failed");
const data = await res.json();
setMessages((prev) => [
...prev,
{ role: "assistant", content: data.reply },
]);
} catch (error) {
setMessages((prev) => [
...prev,
{ role: "assistant", content: "Sorry, something went wrong. Please try again." },
]);
} finally {
setIsLoading(false);
}
};
const handleKeyDown = (e) => {
if (e.key === "Enter" && !e.shiftKey) {
e.preventDefault();
sendMessage();
}
};
return (
<div className="support-chat">
<div className="chat-header">
<span className="chat-status-dot" />
<h3>Support Assistant</h3>
</div>
<div className="chat-messages">
{messages.length === 0 && (
<p className="chat-empty">Hi! How can I help you today?</p>
)}
{messages.map((msg, i) => (
<div key={i} className={`chat-bubble ${msg.role}`}>
{msg.content}
</div>
))}
{isLoading && (
<div className="chat-bubble assistant loading">
<span /><span /><span />
</div>
)}
<div ref={bottomRef} />
</div>
<div className="chat-input-row">
<textarea
value={input}
onChange={(e) => setInput(e.target.value)}
onKeyDown={handleKeyDown}
placeholder="Type a message... (Enter to send)"
rows={1}
disabled={isLoading}
/>
<button onClick={sendMessage} disabled={isLoading || !input.trim()}>
Send
</button>
</div>
</div>
);
}
export default SupportChat;
Part 3: From Inline Component to Floating Widget
The SupportChat component we've built so far is an inline component — it renders wherever you place it in the page layout. This works well embedded in a help page or a sidebar. But the classic support widget pattern is a floating button fixed to the bottom-right corner of the screen that opens and closes a chat panel. Here's how to wrap your existing component to achieve that.
Create a new wrapper component — src/components/FloatingSupport.jsx:
// src/components/FloatingSupport.jsx
import { useState } from "react";
import SupportChat from "./SupportChat";
// Chat bubble icon SVG
const ChatIcon = () => (
<svg viewBox="0 0 24 24" fill="none" stroke="currentColor" strokeWidth={2}>
<path d="M21 15a2 2 0 0 1-2 2H7l-4 4V5a2 2 0 0 1 2-2h14a2 2 0 0 1 2 2z" />
</svg>
);
// Close icon SVG
const CloseIcon = () => (
<svg viewBox="0 0 24 24" fill="none" stroke="currentColor" strokeWidth={2}>
<line x1="18" y1="6" x2="6" y2="18" />
<line x1="6" y1="6" x2="18" y2="18" />
</svg>
);
function FloatingSupport() {
const [isOpen, setIsOpen] = useState(false);
return (
<div className="floating-support">
{/* Chat panel — conditionally rendered */}
{isOpen && (
<div className="floating-chat-panel">
<SupportChat />
</div>
)}
{/* Floating toggle button */}
<button
className="floating-chat-btn"
onClick={() => setIsOpen((prev) => !prev)}
aria-label={isOpen ? "Close support chat" : "Open support chat"}
>
{isOpen ? <CloseIcon /> : <ChatIcon />}
</button>
</div>
);
}
export default FloatingSupport;
Add the CSS to position the widget in the bottom-right corner. Add this to your global stylesheet:
/* Floating widget positioning */
.floating-support {
position: fixed;
bottom: 1.5rem;
right: 1.5rem;
z-index: 1000;
display: flex;
flex-direction: column;
align-items: flex-end;
gap: 0.75rem;
}
.floating-chat-panel {
width: 360px;
max-height: 500px;
border-radius: 1rem;
box-shadow: 0 8px 32px rgba(0, 0, 0, 0.18);
overflow: hidden;
animation: slideUp 0.2s ease;
}
@keyframes slideUp {
from { opacity: 0; transform: translateY(12px); }
to { opacity: 1; transform: translateY(0); }
}
.floating-chat-btn {
width: 56px;
height: 56px;
border-radius: 50%;
background: #10a37f; /* OpenAI green — swap for your brand colour */
border: none;
cursor: pointer;
display: flex;
align-items: center;
justify-content: center;
color: white;
box-shadow: 0 4px 16px rgba(0, 0, 0, 0.2);
transition: transform 0.15s ease;
}
.floating-chat-btn:hover {
transform: scale(1.08);
}
.floating-chat-btn svg {
width: 24px;
height: 24px;
}
Now drop FloatingSupport into your App.jsx once and it appears on every page automatically:
// src/App.jsx
import FloatingSupport from "./components/FloatingSupport";
function App() {
return (
<div className="app">
{/* All your existing routes and layout */}
<Routes>...</Routes>
{/* Floating support widget — rendered once, visible everywhere */}
<FloatingSupport />
</div>
);
}
Customising the System Prompt for Your Product
The system prompt is what transforms a generic AI into a support assistant for your specific product. A vague system prompt produces vague answers. A detailed one produces answers that feel like they came from someone who actually knows your platform. Here's a more complete example for a React learning platform:
const SYSTEM_PROMPT = {
role: "system",
content: `You are a support assistant for ProjectSchool, a React and MERN stack learning platform.
Your job is to help students with:
- React concepts (hooks, state, props, context, lifecycle)
- JavaScript fundamentals
- Node.js and Express basics
- MongoDB and Mongoose
- Debugging and error messages
- Understanding course content on ProjectSchool
Guidelines:
- Keep answers concise and beginner-friendly
- Use code examples when they help clarify
- If a student is stuck on a specific ProjectSchool exercise, encourage them to use the LiveChat for personalised help
- Do not answer questions unrelated to web development or the platform
- If you are unsure, say so — never guess or make up information`,
};
Tips for Production
- Rate limit your /api/chat endpoint — without it, a single user (or a bot) can send thousands of requests and rack up a large OpenAI bill. Use express-rate-limit to cap requests per IP.
- Add a max history length — after 20 messages the conversation history gets long and expensive. Trim the oldest user/assistant pairs before sending to the API, but always keep the system prompt.
- Use gpt-4o-mini for support widgets — it's significantly cheaper than gpt-4o and fast enough for conversational use. Reserve gpt-4o for tasks that genuinely need deeper reasoning.
- Set a spending cap in your OpenAI dashboard — this is your safety net. Set a monthly limit so a billing surprise is impossible.
- Persist messages in localStorage if you want the conversation to survive a page reload — JSON.stringify/parse the messages array on save and load.
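The rate-limiting tip can be sketched with express-rate-limit in your server.js (the window and cap values here are illustrative assumptions; tune them for your traffic):

```javascript
// server/server.js (sketch, assuming express-rate-limit is installed:
// npm install express-rate-limit)
const rateLimit = require("express-rate-limit");

const chatLimiter = rateLimit({
  windowMs: 60 * 1000, // 1-minute window
  max: 20,             // at most 20 requests per IP per window
  message: { error: "Too many requests, please slow down." },
});

app.use("/api/chat", chatLimiter, require("./routes/chat"));
```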
Final Thoughts
A ChatGPT-powered support widget is one of those features that takes an afternoon to build but adds genuine, immediate value to your users. The conversation history pattern — sending the full messages array with every request — is the foundation for every AI chat interface you'll ever build, whether it's a support widget, a coding assistant, or a full AI product.
The most important thing to take away from this guide is the security model: your API key lives on the server, your frontend never knows it exists, and your Express endpoint is the only thing that talks to OpenAI. Get that right and everything else is just UI.
Want to see how to add streaming responses so text appears word by word like ChatGPT, or how to give the assistant knowledge of your specific docs using embeddings? Drop a comment in LiveChat and I'll cover it in a follow-up guide.