Multithreaded HTTP Proxy Server

2025 C | POSIX APIs

A high-performance proxy server with concurrent request handling, LRU caching, and advanced thread synchronization.

(What this project does)

(Why this project)

Most backend systems hide concurrency behind frameworks. This project intentionally avoids abstractions and works directly with:

  • sockets
  • pthreads
  • semaphores
  • mutexes

Building it from scratch made clear how concurrency, synchronization, and caching affect performance and correctness in real servers.

(High-level architecture)

(Request handling flow)

(Synchronization primitives)

pthread_mutex_t

Protects shared data structures like the request queue and cache

sem_t empty

Tracks available slots in the request queue

sem_t full

Tracks the number of pending requests

This design prevents race conditions, avoids busy waiting, and bounds the number of threads.

(Why data is handled in chunks)

HTTP responses are received over TCP, which is a stream-based protocol. There is no guarantee that the entire response will arrive in a single read.

Reading data in chunks:

  • Ensures correctness for partial reads
  • Avoids large memory allocations
  • Supports responses of unknown or large size
  • Allows streaming data directly to the client

This is how real proxy servers forward data efficiently.

(Cache design - LRU)

Responses are stored in an in-memory cache with least-recently-used (LRU) eviction, which significantly reduces response time for repeated requests.

(Performance results)

Cache latency comparison

Serving repeated requests directly from the in-memory cache showed a large latency improvement over forwarding them to the origin server.

Load testing

The proxy was tested using ApacheBench:

Total requests        50
Concurrency level     5
Requests per second   ~800+
Mean latency          ~1–2 ms
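Those figures correspond to an ApacheBench invocation along these lines (the listen address, port, and target URL are assumptions, not the actual test setup):

```shell
# 50 total requests, 5 concurrent, routed through the proxy via ab's -X flag
ab -n 50 -c 5 -X 127.0.0.1:8080 http://example.com/
```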

The server remained stable under concurrent load.