KoreShield

Getting Started

Secure your first LLM integration with KoreShield

Getting Started

This guide will walk you through setting up KoreShield to protect your first LLM integration. We'll configure the security proxy and secure API calls to OpenAI.

Quick Start

1. Install KoreShield

pip install koreshield

2. Configure Your LLM Provider

Create a configuration file config.yaml:

# KoreShield Configuration
providers:
  openai:
    api_key: "sk-your-openai-api-key-here"
    base_url: "https://api.openai.com/v1"

security:
  level: "medium"  # low, medium, high
  log_level: "info"

server:
  host: "localhost"
  port: 8000

3. Start the Security Proxy

koreshield start --config config.yaml

KoreShield will start on http://localhost:8000 and proxy requests to your configured LLM providers with security scanning.

4. Update Your Application

Instead of calling OpenAI directly, route requests through KoreShield:

# Before (direct OpenAI call)
import openai
client = openai.OpenAI(api_key="sk-your-key")

# After (secured through KoreShield)
import openai
client = openai.OpenAI(
    api_key="sk-your-key",
    base_url="http://localhost:8000/v1"  # KoreShield proxy
)

That's it! Your LLM calls are now protected against prompt injection attacks.

What Just Happened

When you make an API call through KoreShield:

  1. Input Sanitization: Your prompt is cleaned and normalized
  2. Attack Detection: KoreShield scans for prompt injection patterns
  3. Policy Enforcement: Based on security level, suspicious requests may be blocked or flagged
  4. Audit Logging: All requests and security decisions are logged
  5. Forward to LLM: Safe requests are forwarded to the actual LLM provider

Next Steps

On this page