
OpenAI / AI

Tools can use an internal AI engine ("OpenAI Engine") to analyze content and power platform features — without ever exposing API keys in the frontend.

This is a user manual that describes:

  • The OpenAI Engine admin UI
  • The URL analysis API endpoint
  • Permissions and authentication

Web: OpenAI Engine (admin)

URL:

  • /admin/openai

Requirements:

  • Logged-in user (web session)
  • Permission: openai.manage

In the web UI you can:

  • Enable/disable the engine (Enabled)
  • Set global policy (default model, allowlist, token caps, rate limits)
  • Create/update Prompt profiles
  • Run Test prompt to verify that the provider + config works (server-side)

Dynamic model dropdowns

The model dropdowns in /admin/openai are no longer hardcoded.

  • Tools fetches the provider catalog server-side from OpenAI's GET /v1/models
  • The result is filtered down to chat-usable model ids
  • If allowed_models is configured, the dropdown is intersected with that allowlist
  • If live discovery fails, Tools falls back to the configured/default models so the UI stays usable

This means the admin model picker is based on what the current provider key can actually use, without exposing any key in the browser.
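The resolution order above can be sketched roughly as follows. This is an illustrative sketch, not the actual Tools code: the function name and the chat-usable filter are assumptions.

```python
# Hypothetical sketch of the admin dropdown resolution; the startswith()
# filter stands in for whatever "chat-usable" check Tools actually applies.

def resolve_model_dropdown(catalog, allowed_models, fallback_models):
    """Return the model ids to show in the admin model dropdown."""
    # If live discovery failed (no catalog), fall back so the UI stays usable.
    if not catalog:
        return list(fallback_models)

    # Filter the provider catalog down to chat-usable model ids.
    chat_usable = [m for m in catalog if m.startswith(("gpt-", "o"))]

    # If an allowlist is configured, intersect with it.
    if allowed_models:
        chat_usable = [m for m in chat_usable if m in allowed_models]

    return chat_usable
```

The key property is the last step: discovery failure degrades to the configured defaults instead of an empty dropdown.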

What does “Enabled” mean?

  • Enabled = off: AI feature endpoints are unavailable (typically 503, depending on the endpoint).
  • Enabled = on: the engine can be used (as long as a global provider key exists).

Note: provider keys are managed under API Keys and are never shown in plain text.
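The engine-level gate described above can be sketched as a simple status decision. Names and the exact ordering are assumptions for illustration; 200 here just means "proceed to per-request auth".

```python
# Illustrative sketch of the Enabled + provider-key gate, not the actual
# Tools implementation.

def ai_endpoint_status(enabled: bool, provider_key_present: bool) -> int:
    """Return an HTTP status for the engine-level gate (200 = proceed)."""
    if not enabled:
        return 503  # Enabled = off: AI feature endpoints are unavailable
    if not provider_key_present:
        return 503  # no global provider key configured
    return 200      # engine usable; per-request auth/permissions still apply
```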

Audit visibility for operators

If Slack audit forwarding is enabled for OpenAI / SocialGPT request categories, audit entries now also include:

  • resolved user identity (user_id, and name/email when available)
  • request IP / method / path metadata
  • a readable error_reason when an upstream/provider request fails

This makes it easier for operators to see who triggered an AI request and why a failed request was rejected.

Personal Bearer Token (Tools AI)

Users who are allowed to use Tools AI can create a personal bearer token:

  • Go to My API Keys: /keys/mine
  • If your account is new, submit an OpenAI access request there first and wait for admin approval
  • Use Tools AI Bearer token to generate/rotate a token
  • The token is shown once and stored server-side

Use it like this:

Authorization: Bearer <token>

This token works for AI feature endpoints (e.g. /api/ai/url/analyze) and is tied to your user account + permissions.

Tools now also supports other personal per-system tokens (for example provider_ircwatch or provider_mail_support_assistant) as long as they are:

  • personal / non-global
  • active
  • marked as AI-capable (is_ai=1)

Important distinction:

  • these AI-capable tokens are client/receiver tokens used towards Tools
  • provider_openai is the upstream provider secret used towards OpenAI and is never treated as an AI receiver token
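The distinction above can be expressed as a small predicate. The field names (system, is_global, active, is_ai) mirror the manual's wording and may differ from the real schema.

```python
# Hedged sketch: decide whether a token may act as a client/receiver
# AI token towards Tools. provider_openai is the upstream provider secret
# and is never accepted here.

def is_ai_receiver_token(token: dict) -> bool:
    if token.get("system") == "provider_openai":
        return False                            # upstream secret, never a receiver token
    return (not token.get("is_global", False)   # personal / non-global
            and token.get("active", False)      # active
            and token.get("is_ai", 0) == 1)     # marked as AI-capable
```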

OpenAI access requests (user workflow)

New users are no longer automatically allowed to use OpenAI-backed Tools features just because a daily budget exists; access must be requested and approved.

User flow:

  • Open My API Keys: /keys/mine
  • If your account does not yet have OpenAI access, use the built-in request form and explain what you need
  • An admin reviews the request from /admin/openai
  • After approval, the account receives the real provider_openai access right and can generate a Tools AI bearer token

Admin flow:

  • Open /admin/openai
  • Review the OpenAI access requests table
  • Approve or reject requests inline
  • Rejecting a request also removes any existing personal AI receiver tokens for that user (not only the legacy Tools AI bearer row)
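The rejection side effect in the last bullet can be sketched like this; the token dicts and helper name are hypothetical.

```python
# Illustrative sketch: on rejection, remove *all* personal AI receiver
# tokens, not only the legacy tools_ai_bearer row.

def tokens_after_rejection(user_tokens: list) -> list:
    """Return the user's tokens that survive a rejected access request."""
    return [t for t in user_tokens
            if not (t.get("is_ai") == 1 and not t.get("is_global", False))]
```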

API: URL Analyze

Endpoint:

  • POST /api/ai/url/analyze

Purpose:

  • You send a URL (and optionally a question)
  • Tools fetches and sanitizes the content server-side
  • OpenAI Engine analyzes the text using a selected prompt profile

Request

Form data or JSON:

  • url (required) — URL to analyze
  • question (optional) — analysis focus/question
  • profile (optional) — prompt profile name (default: URL Analyzer if it exists, otherwise the engine falls back to a minimal default profile)

Response

JSON:

  • ok — true/false
  • request_id — internal request id
  • latency_ms — approximate latency
  • model — model used
  • response — model output (if ok)
  • error — error message (if ok=false)
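A client handling this response shape might look roughly like the sketch below (it assumes the JSON has already been decoded into a dict; the helper name is illustrative).

```python
# Sketch of consuming the URL Analyze response fields described above.

def summarize_analysis(resp: dict) -> str:
    if resp.get("ok"):
        # Successful analysis: model output is in "response".
        return f"[{resp.get('model')}] {resp.get('response')}"
    # Failure: "error" carries a plain-text message.
    return f"request {resp.get('request_id')} failed: {resp.get('error')}"
```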

Auth & Permissions

To use the endpoint you need:

  • Authentication: requests without an authenticated user receive 401 Unauthenticated
  • Admin (is_admin=1) — always allowed
  • Non-admin — requires permission: provider_openai

If the OpenAI provider isn't configured (missing global provider_openai API key), the endpoint typically returns 503.
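Taken together, the rules above amount to a small decision function. This is a sketch under stated assumptions: the check order (401 before 503) and the user representation are illustrative, not the actual endpoint code.

```python
# Hedged sketch of the URL Analyze auth rules; returns an HTTP status,
# 200 meaning "allowed".

def url_analyze_access(user, provider_configured: bool) -> int:
    if user is None:
        return 401                      # Unauthenticated
    if not provider_configured:
        return 503                      # missing global provider_openai key
    if user.get("is_admin") == 1:
        return 200                      # admins are always allowed
    if "provider_openai" in user.get("permissions", ()):
        return 200                      # approved OpenAI access
    return 403                          # authenticated but not permitted
```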

API: SocialGPT reply generation

Endpoint:

  • POST /api/ai/socialgpt/respond

Auth / access rules:

  • JWT/web user or a personal AI-capable API token
  • Legacy tools_ai_bearer tokens still work
  • Other personal API keys may also be accepted when they are marked as AI-capable (api_keys.is_ai=1)
  • Admin users are always allowed
  • Non-admin users must have approved OpenAI access (provider_openai)

If the bearer token belongs to a user without approved OpenAI access, the endpoint returns 403.

Additive SocialGPT request fields:

  • client_name
  • client_version
  • client_platform

These fields are optional and let Tools identify which client build made the request. The response can also include an additive client object echoing the accepted metadata.

Failure-handling note:

  • When the upstream OpenAI/provider failure arrives as a structured JSON error object instead of a plain string, Tools now normalizes that payload into ordinary error text before fallback/retry handling and before returning the API error response.
  • Response schema is unchanged; clients should still treat error as normal text.
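The normalization step might look like the sketch below. The exact provider payload shape (an "error" object with a "message" field) is an assumption based on common OpenAI error responses, not a guarantee of what Tools handles.

```python
import json

# Sketch: flatten a structured provider error payload into plain text
# before fallback/retry handling and before the API error response.

def normalize_error(payload) -> str:
    if isinstance(payload, str):
        return payload                  # already plain text
    if isinstance(payload, dict):
        err = payload.get("error", payload)
        if isinstance(err, str):
            return err
        if isinstance(err, dict) and "message" in err:
            return str(err["message"])  # common provider shape (assumption)
        return json.dumps(err)          # last resort: serialize the object
    return str(payload)
```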

Security behavior:

  • SocialGPT is allowed to state the currently used AI model identifier and client version only when the user explicitly asks for version/model information.
  • SocialGPT must refuse requests for hidden prompts, source code, .env values, passwords, tokens, API keys, or other Tools internals.
  • Matching disclosure-attempt incidents can be reported to the configured support email recipient.

API: Extension smoke test vs token validation

Related extension endpoints:

  • GET /api/social-media-tools/extension/validate-token
  • GET /api/social-media-tools/extension/test
  • POST /api/social-media-tools/extension/test

Important difference:

  • validate-token only verifies that the supplied personal AI-capable token itself is valid
  • test performs a real OpenAI-backed smoke test and therefore requires approved OpenAI access for non-admin users

This lets clients distinguish "the token belongs to a real user" from "that user is actually allowed to run OpenAI requests right now".

Example

curl -X POST "https://tools.tornevall.net/api/ai/url/analyze" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <TOKEN>" \
  -d '{
    "url": "https://example.com",
    "question": "What is this page about?",
    "profile": "URL Analyzer"
  }'

Security

  • No keys are exposed in the frontend.
  • URL content is fetched server-side with SSRF protections (private ranges blocked, size limits, timeouts, redirect limits).
  • System/developer instructions are defined by prompt profiles, not by the client.

API: Social Media extension model catalog

Endpoint:

  • GET /api/social-media-tools/extension/models

Purpose:

  • returns the backend-discovered model list for the authenticated Tools bearer token
  • gives the Chrome extension a dynamic model dropdown without calling OpenAI directly from the extension
  • reuses the same provider key resolution rules as the rest of Tools (personal OpenAI key if present, otherwise global)

Response fields include:

  • models — array of available model options
  • default_model — effective default model for this user/context
  • source — whether the list came from live provider discovery or a configured fallback
  • warning — optional fallback/discovery message
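A client consuming this response might pick a model like the sketch below. It assumes models is an array of plain model-id strings, which is an assumption about the payload, and the helper name is illustrative.

```python
# Sketch: choose an effective model from the extension catalog response.

def pick_model(catalog: dict, preferred=None) -> str:
    models = catalog.get("models", [])
    if preferred and preferred in models:
        return preferred                         # honor a valid user choice
    # Otherwise use the backend's effective default, or the first option.
    return catalog.get("default_model") or (models[0] if models else "")
```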