Dashboard — Code Flow
This page explains how the dashboard behaves at runtime — how authentication works end to end, how data gets into the UI, and what happens when the user performs each major action.
Authentication flow
The full magic link sequence looks like this:
The user enters their email on /login. TanStack Form validates the address with a zod schema before making any network call. Once validated, authClient.signIn.magicLink is called with the email and callbackURL: "/auth/callback". The API generates a signed verification URL using BETTER_AUTH_SECRET and prints it to stdout (in the development setup, where email delivery is stubbed).
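A minimal sketch of the submit path, assuming the better-auth client with its magic link plugin; the zod schema and `onSubmit` handler here are illustrative, not the actual form code:

```ts
import { z } from "zod";
import { createAuthClient } from "better-auth/client";
import { magicLinkClient } from "better-auth/client/plugins";

const authClient = createAuthClient({ plugins: [magicLinkClient()] });

// TanStack Form runs this schema before any network call is made.
const emailSchema = z.object({ email: z.string().email() });

async function onSubmit(values: { email: string }) {
  emailSchema.parse(values); // shown inline; the form validates on its own
  await authClient.signIn.magicLink({
    email: values.email,
    callbackURL: "/auth/callback", // where the signed link sends the user
  });
}
```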
The user visits that URL. The browser lands on /auth/callback with the token in the query string. Because SSR is disabled for this route, the verification logic runs entirely in the browser — on mount, the component reads verifyToken or token from the URL and calls authClient.magicLink.verify. The API validates the token's signature, creates a session row, and issues a session cookie. The dashboard then navigates to /contents.
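The callback component could look roughly like this, assuming TanStack Router's `useNavigate` and the `authClient` from the sketch above; the component name and markup are invented:

```tsx
import { useEffect } from "react";
import { useNavigate } from "@tanstack/react-router";

// SSR is disabled for this route, so this effect only ever runs in the browser.
function AuthCallback() {
  const navigate = useNavigate();

  useEffect(() => {
    const params = new URLSearchParams(window.location.search);
    const token = params.get("verifyToken") ?? params.get("token");
    if (!token) return;

    authClient.magicLink
      .verify({ query: { token } }) // API checks the signature, sets the cookie
      .then(() => navigate({ to: "/contents" }));
  }, [navigate]);

  return <p>Verifying…</p>;
}
```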
From that point, session validity is checked on every navigation into any route nested under /_authenticated. The layout's beforeLoad hook calls authClient.getSession() on each entry. If the session has expired or the cookie is absent, the navigation is cancelled and the user is redirected to /login. The check completes before any child route renders.
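In TanStack Router terms, the guard is a `beforeLoad` on the layout route; a sketch under that assumption:

```ts
import { createFileRoute, redirect } from "@tanstack/react-router";

export const Route = createFileRoute("/_authenticated")({
  // Runs on every navigation into the layout, before any child renders.
  beforeLoad: async () => {
    const { data: session } = await authClient.getSession();
    if (!session) {
      throw redirect({ to: "/login" }); // cancels the navigation
    }
  },
});
```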
Data fetching strategy
All server state is managed by TanStack Query. Each query is identified by a key (e.g. ["contents"], ["analytics", "daily"]) and the results are cached in a shared QueryClient. Mutations that modify server state call queryClient.invalidateQueries with the relevant key to trigger a background re-fetch.
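In code, the read and write sides pair up like this (`fetchContents` is a hypothetical fetcher; the examples on this page use TanStack Query's v5 object syntax):

```ts
import { useQuery, useQueryClient } from "@tanstack/react-query";

// Read side, inside a component: results are cached under the key.
const contentsQuery = useQuery({
  queryKey: ["contents"],
  queryFn: fetchContents, // hypothetical fetcher
});

// Write side: after a successful mutation, mark the key stale so the
// cached data re-fetches in the background.
const queryClient = useQueryClient();
queryClient.invalidateQueries({ queryKey: ["contents"] });
```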
The contents list takes an aggressive pre-fetch approach: on mount, it fires up to 10 requests to GET /api/contents in parallel, each fetching 100 items (one page). This loads the full dataset — up to 1,000 items — into the cache in a single burst. The tradeoff is a heavier initial load, but the benefit is that all subsequent filtering and pagination happen instantly without any additional requests.
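One way to express that burst is `useQueries`; the `page`/`limit` parameter names below are assumptions:

```ts
import { useQueries } from "@tanstack/react-query";

const PAGE_SIZE = 100;
const MAX_PAGES = 10;

// Ten page queries fired in parallel; each lands in the cache separately.
const pages = useQueries({
  queries: Array.from({ length: MAX_PAGES }, (_, page) => ({
    queryKey: ["contents", page],
    queryFn: () =>
      fetch(`/api/contents?page=${page}&limit=${PAGE_SIZE}`, {
        credentials: "include",
      }).then((r) => r.json()),
  })),
});

// Up to 1,000 items once every page has resolved.
const allItems = pages.flatMap((q) => q.data ?? []);
```

Keying each page as `["contents", page]` also means a single invalidation of `["contents"]` prefix-matches every page, which is what the SSE refresh path described below relies on.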
The analytics page fires 6 independent queries simultaneously. Because TanStack Query executes independent queries concurrently, each resolves as soon as its own API call returns rather than waiting for the others.
The content detail page makes a single request that returns both the article record and its full prediction history. These are merged server-side and returned as one response object.
All requests include credentials: "include" so the session cookie is forwarded on every call.
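A small wrapper captures that convention; `API_URL` as a Vite env variable is an assumption, and the real client code may differ:

```ts
const API_URL = import.meta.env.VITE_API_URL; // assumed env variable

async function apiFetch<T>(path: string, init?: RequestInit): Promise<T> {
  const res = await fetch(`${API_URL}${path}`, {
    ...init,
    credentials: "include", // always forward the session cookie
  });
  if (!res.ok) throw new Error(`${res.status} ${res.statusText}`);
  return res.json() as Promise<T>;
}
```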
Client-side filtering and pagination
Because the contents list loads its full dataset upfront, every filter the user applies runs entirely in the browser through TanStack Table's filtering system. Title filtering uses a substring match against the baslik field. Source, model category, approved category, and language filters use exact match. Date filtering uses a custom function that compares the yayim_tarihi (publish date) value against an optional start and end bound.
Pagination is also client-side, at 16 rows per page. Changing the page or adjusting a filter never triggers a network request — the table re-derives visible rows from the in-memory dataset.
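A sketch of the table wiring covering the custom date filter and the 16-row pages, assuming `contents` (the cached dataset) and `columns` are defined elsewhere; the `Content` row type and the filter-value shape are trimmed for illustration:

```ts
import {
  getCoreRowModel,
  getFilteredRowModel,
  getPaginationRowModel,
  useReactTable,
  type FilterFn,
} from "@tanstack/react-table";

// Minimal row shape for illustration; the real record has more fields.
type Content = { baslik: string; yayim_tarihi: string };

// Keep a row when yayim_tarihi falls inside the optional [start, end] bounds.
const dateRangeFilter: FilterFn<Content> = (row, columnId, value) => {
  const published = new Date(row.getValue<string>(columnId));
  const { start, end } = value as { start?: Date; end?: Date };
  if (start && published < start) return false;
  if (end && published > end) return false;
  return true;
};

const table = useReactTable({
  data: contents,
  columns,
  getCoreRowModel: getCoreRowModel(),
  getFilteredRowModel: getFilteredRowModel(),     // in-browser filtering
  getPaginationRowModel: getPaginationRowModel(), // in-browser pagination
  filterFns: { dateRange: dateRangeFilter },
  initialState: { pagination: { pageSize: 16 } },
});
```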
This means the UI is consistently fast once the initial load completes, but it also means that newly processed articles won't appear until the dataset is refreshed. The SSE connection (described below) is what bridges that gap.
Real-time updates via SSE
When the contents page mounts, the useSSE hook opens a persistent EventSource connection to GET /api/events. On the API side, this endpoint bridges the processed_content_aa Kafka topic: whenever the consumer finishes classifying an article, it emits a message on that topic; the API's SSE bridge picks it up and pushes a processed_content event to all connected clients.
The dashboard does not automatically refresh on receiving an event. Instead, hasNewUpdates is set to true, which renders a banner at the top of the contents page informing the user that new data is available. This is intentional — an automatic refresh could interrupt the user mid-filter or mid-scroll. The user clicks the banner to confirm, which calls queryClient.invalidateQueries(["contents"]). TanStack Query re-fires all the page requests, the cache is updated, and the banner is dismissed.
The EventSource connection stays open for the entire time the contents page is mounted. The browser handles reconnection automatically if the connection drops.
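A sketch of what useSSE could look like; everything beyond the hook name, the event name, and the hasNewUpdates flag is a guess:

```ts
import { useEffect, useState } from "react";

export function useSSE(url: string) {
  const [hasNewUpdates, setHasNewUpdates] = useState(false);

  useEffect(() => {
    // The browser reconnects an EventSource on its own if the stream drops.
    const source = new EventSource(url, { withCredentials: true });
    source.addEventListener("processed_content", () => setHasNewUpdates(true));
    return () => source.close(); // closed when the contents page unmounts
  }, [url]);

  return {
    hasNewUpdates,
    acknowledge: () => setHasNewUpdates(false), // called from the banner
  };
}
```

The banner's click handler would then pair `acknowledge()` with `queryClient.invalidateQueries({ queryKey: ["contents"] })` to trigger the re-fetch described above.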
Category override flow
The content detail page lets the user set the human-approved kategori for an article. The dropdown is pre-populated with the item's current kategori value (the human-assigned one). The model-predicted model_kategori is displayed separately and cannot be edited from the UI.
When the user saves a selection, the dashboard sends PATCH /api/contents/:id/category with the integer category value (1–7). The API writes only to the kategori column — it never touches model_kategori. This separation is what allows the analytics comparison endpoint to compute meaningful agreement rates: both values exist independently and can diverge freely.
On a successful patch, TanStack Query invalidates two queries: ["content", contentId] (the detail view) and ["contents"] (the list). Both re-fetch in the background, so the updated category badge on the detail page and the approved category column in the list reflect the change without a manual reload.
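The save path fits TanStack Query's mutation shape; the `{ kategori }` body is an assumption about the request payload:

```ts
import { useMutation, useQueryClient } from "@tanstack/react-query";

const queryClient = useQueryClient();

const updateCategory = useMutation({
  mutationFn: ({ id, kategori }: { id: string; kategori: number }) =>
    fetch(`/api/contents/${id}/category`, {
      method: "PATCH",
      credentials: "include",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ kategori }), // 1-7; writes only the kategori column
    }),
  onSuccess: (_res, { id }) => {
    // Refresh both views that display the approved category.
    queryClient.invalidateQueries({ queryKey: ["content", id] });
    queryClient.invalidateQueries({ queryKey: ["contents"] });
  },
});

// usage: updateCategory.mutate({ id: contentId, kategori: 3 });
```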
Requeue flow
Requeueing is available in two places: the contents list (for any item) and the pending page (for unclassified items). In both cases, the user selects one or more rows and initiates the requeue action.
The dashboard POSTs to /api/contents/requeue with an array of UUIDs. The API loads each item, checks that baslik (title) and ozet (summary) are non-null, and silently skips any that are missing either field. For qualifying items, it emits { id: source_id, baslik, ozet } to the raw_content_aa Kafka topic. The consumer is subscribed to this topic and will re-run ML inference on each received message.
The response from the API includes the count of items actually requeued. From the dashboard's perspective, the relevant query is invalidated on success so any status changes are reflected in the table.
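A sketch of the dashboard side; the `{ ids }` payload and the `requeued` response field are assumptions about the wire format:

```ts
import { useMutation, useQueryClient } from "@tanstack/react-query";

const queryClient = useQueryClient();

const requeue = useMutation({
  mutationFn: async (ids: string[]) => {
    const res = await fetch("/api/contents/requeue", {
      method: "POST",
      credentials: "include",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ ids }), // UUIDs of the selected rows
    });
    // Items missing baslik or ozet are skipped server-side, so the
    // returned count may be lower than ids.length.
    return res.json() as Promise<{ requeued: number }>;
  },
  onSuccess: () => queryClient.invalidateQueries({ queryKey: ["contents"] }),
});
```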
Trigger fetch flow
The Trigger Fetch button in the sidebar footer is accessible from every authenticated page. It sends a POST to /api/trigger with no request body. The API emits a message with the current timestamp to the fetch_content_aa Kafka topic. The producer service has a consumer on that topic — receiving any message causes it to start a new AA API fetch run immediately, bypassing its cron schedule.
The dashboard shows a Kumo toast to confirm success or report an error. No query invalidation happens here: the newly fetched articles won't appear in the contents list until the producer has stored them, the consumer has classified them, and the user either receives an SSE event or manually refreshes the page.
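The whole flow reduces to a single call on the dashboard side; `toast.success` and `toast.error` below stand in for the Kumo toast API, whose exact names are not shown here:

```ts
async function triggerFetch() {
  try {
    const res = await fetch("/api/trigger", {
      method: "POST", // no request body
      credentials: "include",
    });
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    toast.success("Fetch triggered"); // stand-in for the Kumo toast
  } catch {
    toast.error("Could not trigger a fetch"); // stand-in for the Kumo toast
  }
}
```

Note that, as described above, no query invalidation follows a successful trigger.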