# Getting Started

Secure your first LLM integration with KoreShield.
This guide will walk you through setting up KoreShield to protect your first LLM integration. We'll configure the security proxy and secure API calls to OpenAI.
## Quick Start
### 1. Install KoreShield

```bash
pip install koreshield
```

### 2. Configure Your LLM Provider
Create a configuration file `config.yaml`:
```yaml
# KoreShield Configuration
providers:
  openai:
    api_key: "sk-your-openai-api-key-here"
    base_url: "https://api.openai.com/v1"

security:
  level: "medium"  # low, medium, high
  log_level: "info"

server:
  host: "localhost"
  port: 8000
```

### 3. Start the Security Proxy
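Before starting the proxy, it can help to sanity-check the configuration. A minimal, illustrative script (requires the PyYAML package; the inline string mirrors the config above, and the checks are our own, not a KoreShield validator):

```python
import yaml  # PyYAML: pip install pyyaml

# Inline copy of the config from the previous step, for illustration.
CONFIG = """\
providers:
  openai:
    api_key: "sk-your-openai-api-key-here"
    base_url: "https://api.openai.com/v1"
security:
  level: "medium"
  log_level: "info"
server:
  host: "localhost"
  port: 8000
"""

config = yaml.safe_load(CONFIG)

# Basic shape checks before handing the file to the proxy.
assert config["security"]["level"] in ("low", "medium", "high")
assert isinstance(config["server"]["port"], int)
assert config["providers"]["openai"]["base_url"].startswith("https://")
```

In a real setup you would `yaml.safe_load()` the `config.yaml` file itself rather than an inline string.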
```bash
koreshield start --config config.yaml
```

KoreShield starts on `http://localhost:8000` and proxies requests to your configured LLM providers with security scanning.
### 4. Update Your Application
Instead of calling OpenAI directly, route requests through KoreShield:
```python
import openai

# Before (direct OpenAI call)
client = openai.OpenAI(api_key="sk-your-key")

# After (secured through KoreShield)
client = openai.OpenAI(
    api_key="sk-your-key",
    base_url="http://localhost:8000/v1",  # KoreShield proxy
)
```

That's it! Your LLM calls are now protected against prompt injection attacks.
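A common pattern is to toggle the proxy per environment, so local development can point at KoreShield while other environments hit the provider directly. A small sketch (the `KORESHIELD_URL` environment variable is our own convention, not a KoreShield feature):

```python
import os

def llm_base_url(default: str = "https://api.openai.com/v1") -> str:
    """Return the KoreShield proxy URL when configured, else the provider URL."""
    proxy = os.environ.get("KORESHIELD_URL")  # e.g. "http://localhost:8000"
    return proxy.rstrip("/") + "/v1" if proxy else default

# client = openai.OpenAI(api_key="sk-your-key", base_url=llm_base_url())
```

Because KoreShield exposes an OpenAI-compatible endpoint, no other application code needs to change.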
## What Just Happened
When you make an API call through KoreShield:
- **Input Sanitization:** your prompt is cleaned and normalized
- **Attack Detection:** KoreShield scans for prompt injection patterns
- **Policy Enforcement:** based on the security level, suspicious requests may be blocked or flagged
- **Audit Logging:** all requests and security decisions are logged
- **Forward to LLM:** safe requests are forwarded to the actual LLM provider
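The pipeline above can be sketched as plain functions. This is an illustrative simplification, not KoreShield's actual detection rules or policy engine:

```python
import re

# Toy stand-ins for real injection signatures.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard (the )?system prompt",
]

def sanitize(prompt: str) -> str:
    # Input sanitization: collapse whitespace and strip edges.
    return re.sub(r"\s+", " ", prompt).strip()

def detect(prompt: str) -> bool:
    # Attack detection: does any known pattern match?
    return any(re.search(p, prompt, re.IGNORECASE) for p in INJECTION_PATTERNS)

def enforce(prompt: str, level: str = "medium") -> str:
    # Policy enforcement: "high" blocks, "medium" flags, "low" allows.
    prompt = sanitize(prompt)
    if detect(prompt):
        if level == "high":
            raise PermissionError("Blocked: possible prompt injection")
        if level == "medium":
            print(f"[audit] flagged request: {prompt!r}")  # audit logging
    return prompt  # the (safe or flagged) prompt is forwarded to the provider
```

With `level: "high"`, a prompt like "Ignore previous instructions..." raises instead of merely being flagged, which matches the blocked-or-flagged behavior described above.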