Private Endpoint

Overview

Private Endpoint offers secure, FADP-compliant access to powerful Large Language Models (LLMs) hosted exclusively in Switzerland — ensuring data privacy and regulatory peace of mind.

Security and Data Privacy

GuardOS is committed to ensuring the security and privacy of your data. See Security and Data Privacy for more information.

Pricing

Pricing is based on expected volume. Please contact us for a quote.

Endpoints

Method   Endpoint
POST     https://api.guardos.ai/api/v1/private-endpoint

POST: /api/v1/private-endpoint

Request Headers

interface PrivateEndpointRequestHeaders {
	'Content-Type': 'application/json'
	'x-api-key': string
}
Header Name    Type     Description
Content-Type   string   application/json
x-api-key      string   Your API key

Request Body

interface PrivateEndpointRequest {
	messages: {
		role: 'user' | 'assistant'
		content: string
	}[]
	settings?: {
		model?: string // Default: deepseek-v3-0324
		stream?: boolean // Default: false
		temperature?: number // Default: 0
		top_p?: number // Default: 1
		top_k?: number // Default: 0
		frequency_penalty?: number // Default: 0
		presence_penalty?: number // Default: 0
		system?: string // Default: You are a helpful assistant.
	}
}
Field Name          Type        Description               Default Value
messages            Message[]   Array of messages
settings            Settings    Optional settings
model               string      Model identifier to use   deepseek-v3-0324
stream              boolean     Stream response           false
temperature         number      Temperature               0
top_p               number      Top P                     1
top_k               number      Top K                     0
frequency_penalty   number      Frequency penalty         0
presence_penalty    number      Presence penalty          0
system              string      System prompt             You are a helpful assistant.
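
To illustrate the shape above, here is a sketch of a request body that sets every optional field explicitly to its documented default; the values are taken from the table and are not recommendations.

TypeScript
const body: PrivateEndpointRequest = {
    messages: [
        { role: 'user', content: 'Hello, how are you?' },
    ],
    settings: {
        model: 'deepseek-v3-0324',              // Default model
        stream: false,                          // Return the full response at once
        temperature: 0,                         // Deterministic output
        top_p: 1,
        top_k: 0,
        frequency_penalty: 0,
        presence_penalty: 0,
        system: 'You are a helpful assistant.', // Default system prompt
    },
}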

Success Response

interface PrivateEndpointResponse {
	text: string
	files: []
	reasoningDetails: []
	toolCalls: []
	toolResults: []
	finishReason: 'stop'
	warnings: []
	sources: []
}
Field Name         Type     Description
text               string   Response text of the model
files              []       Files generated by the model
reasoningDetails   []       Reasoning details
toolCalls          []       Tool calls
toolResults        []       Tool results
finishReason       string   Finish reason
warnings           []       Warnings
sources            []       Sources
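
For orientation, a successful non-streaming response has the following shape; the text value below is purely illustrative, and the array fields are shown empty.

TypeScript
const example: PrivateEndpointResponse = {
    text: "I'm doing well, thank you! How can I help you today?", // Illustrative model output
    files: [],
    reasoningDetails: [],
    toolCalls: [],
    toolResults: [],
    finishReason: 'stop',
    warnings: [],
    sources: [],
}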

Error Response

interface PrivateEndpointErrorResponse {
	statusCode: number
	error: string
	message: string
}
Field Name   Type     Description
statusCode   number   HTTP status code of the error
error        string   Error name
message      string   Human-readable error message
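
A minimal sketch of client-side error handling, assuming the error body follows the interface above; the status code and messages shown in the comment are hypothetical examples, not guaranteed values.

TypeScript
const response = await fetch('https://api.guardos.ai/api/v1/private-endpoint', {
    method: 'POST',
    headers: {
        'Content-Type': 'application/json',
        'x-api-key': 'YOUR_API_KEY',
    },
    body: JSON.stringify({ messages: [{ role: 'user', content: 'Hello' }] }),
});

if (!response.ok) {
    // The body is expected to match PrivateEndpointErrorResponse,
    // e.g. { statusCode: 401, error: 'Unauthorized', message: '...' } (hypothetical values).
    const err: PrivateEndpointErrorResponse = await response.json();
    throw new Error(`Request failed (${err.statusCode}): ${err.error} - ${err.message}`);
}

const data: PrivateEndpointResponse = await response.json();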

Examples

Basic Request

For a basic request, you only need to provide your API key and the messages array.

curl
curl -X POST 'https://api.guardos.ai/api/v1/private-endpoint' \
-H 'Content-Type: application/json' \
-H 'x-api-key: YOUR_API_KEY' \
-d '{
  "messages": [
    { "role": "user", "content": "Hello, how are you?" }
  ]
}'
TypeScript
const response = await fetch('https://api.guardos.ai/api/v1/private-endpoint', {
    method: 'POST',
    headers: { 
        'Content-Type': 'application/json', 
        'x-api-key': 'YOUR_API_KEY' 
    },
    body: JSON.stringify({ 
        messages: [{ role: "user", content: "Hello, how are you?" }],
    }),
});
const data = await response.json();
Python
import requests
import json

response = requests.post(
    'https://api.guardos.ai/api/v1/private-endpoint',
    headers={'Content-Type': 'application/json', 'x-api-key': 'YOUR_API_KEY'},
    data=json.dumps({'messages': [{'role': 'user', 'content': 'Hello, how are you?'}]})
)
data = response.json()

Basic Streaming Request

For a basic streaming request, set settings.stream to true.

curl
curl --no-buffer -X POST https://api.guardos.ai/api/v1/private-endpoint \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "messages": [{"role": "user", "content": "Hello"}],
    "settings": {
        "stream": true
    }
  }'
TypeScript
const response = await fetch('https://api.guardos.ai/api/v1/private-endpoint', {
    method: 'POST',
    headers: { 
        'Content-Type': 'application/json', 
        'x-api-key': 'YOUR_API_KEY' 
    },
    body: JSON.stringify({ 
        messages: [{ role: "user", content: "Hello, how are you?" }],
        settings: {
            stream: true // <-- Enable streaming
        }
    }),
});

const reader = response.body.getReader();
const decoder = new TextDecoder('utf-8');

while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    const chunk = decoder.decode(value, { stream: true });
    console.log(chunk); // Or parse/accumulate it as needed
}

Python
import httpx
import asyncio
import json

async def main():
    url = 'https://api.guardos.ai/api/v1/private-endpoint'
    headers = {
        'Content-Type': 'application/json',
        'x-api-key': 'YOUR_API_KEY',
    }
    data = {
        "messages": [{"role": "user", "content": "Hello"}],
        "settings": {
            "stream": True, # <-- Enable streaming
        }
    }

    async with httpx.AsyncClient() as client:
        async with client.stream("POST", url, headers=headers, json=data) as response:
            async for chunk in response.aiter_text():
                print(chunk, end='')

asyncio.run(main())

Custom System Prompt

You can set a custom system prompt by setting settings.system to your desired instructions.

curl
curl -X POST 'https://api.guardos.ai/api/v1/private-endpoint' \
-H 'Content-Type: application/json' \
-H 'x-api-key: YOUR_API_KEY' \
-d '{
  "messages": [
    { "role": "user", "content": "Hello, how are you?" }
  ],
  "settings": {
    "system": "You are to only respond in German."
  }
}'
TypeScript
const response = await fetch('https://api.guardos.ai/api/v1/private-endpoint', {
    method: 'POST',
    headers: { 
        'Content-Type': 'application/json', 
        'x-api-key': 'YOUR_API_KEY' 
    },
    body: JSON.stringify({ 
        messages: [{ role: "user", content: "Hello, how are you?" }],
        settings: {
            system: "You are to only respond in German." // <-- Set custom system prompt
        }
    }),
});
const data = await response.json();
Python
import requests
import json

response = requests.post(
    'https://api.guardos.ai/api/v1/private-endpoint',
    headers={'Content-Type': 'application/json', 'x-api-key': 'YOUR_API_KEY'},
    data=json.dumps({
        'messages': [{'role': 'user', 'content': 'Hello, how are you?'}],
        'settings': {
            'system': "You are to only respond in German." # <-- Set custom system prompt   
        }
    })
)
data = response.json()

Custom Model Settings

With each request, you can set custom model settings. For example, you can set the temperature to control the randomness of the model's responses: a low value makes the model more deterministic, while a high value makes it more creative.

  • Range: 0 - 1.0
  • Default: 0
curl
curl -X POST 'https://api.guardos.ai/api/v1/private-endpoint' \
-H 'Content-Type: application/json' \
-H 'x-api-key: YOUR_API_KEY' \
-d '{
  "messages": [
    { "role": "user", "content": "Hello, how are you?" }
  ],
  "settings": {
    "temperature": 0.5
  }
}'
TypeScript
const response = await fetch('https://api.guardos.ai/api/v1/private-endpoint', {
    method: 'POST',
    headers: { 
        'Content-Type': 'application/json', 
        'x-api-key': 'YOUR_API_KEY' 
    },
    body: JSON.stringify({ 
        messages: [{ role: "user", content: "Hello, how are you?" }],
        settings: {
            temperature: 0.5 // <-- Set custom temperature
        }
    }),
});
const data = await response.json();
Python
import requests
import json

response = requests.post(
    'https://api.guardos.ai/api/v1/private-endpoint',
    headers={'Content-Type': 'application/json', 'x-api-key': 'YOUR_API_KEY'},
    data=json.dumps({
        'messages': [{'role': 'user', 'content': 'Hello, how are you?'}],
        'settings': {
            'temperature': 0.5 # <-- Set custom temperature
        }
    })
)
data = response.json()