Long Polling: The Patient Connection

Understanding long polling - an efficient approach where the server holds requests open until data is available.


Long Polling

Long polling represents a significant improvement over short polling by holding the connection open until data is available. Like a patient waiter at a restaurant who stands by your table until you're ready to order, long polling eliminates the constant back-and-forth of traditional polling.

Interactive Demonstration

Watch how long polling holds connections open and delivers data with minimal latency:

[Interactive demo: a client and server exchange held connections while counters track connections opened, events received, average wait time, and efficiency, alongside an activity log.]
Long Polling Advantages:

  • Near real-time data delivery (latency ~0ms after event occurs)
  • No wasted requests - only reconnects after receiving data
  • Server controls timing - sends data when available
  • Works through firewalls and proxies (standard HTTP)
  • Graceful degradation with timeouts

How Long Polling Works

The Connection Lifecycle

Client                                Server
  |                                     |
  |-------- GET /api/data ------------->|
  |                                     | (Hold connection)
  |         Connection held             | (Waiting for event...)
  |         (10 seconds...)             |
  |                                     | EVENT OCCURS!
  |<------- 200 OK (data) --------------|
  |                                     |
  |-------- GET /api/data ------------->| (Immediate reconnect)
  |                                     | (Hold connection)
  |         Connection held             | (Waiting for event...)
  |         (3 seconds...)              |
  |                                     | EVENT OCCURS!
  |<------- 200 OK (data) --------------|
  |                                     |
  ... (continues) ...

Implementation Pattern

async function longPoll(url) {
  while (true) {
    try {
      const response = await fetch(url, {
        // Long timeout to hold connection
        signal: AbortSignal.timeout(30000)
      });
      const data = await response.json();
      handleData(data);
      // Immediate reconnect
      continue;
    } catch (error) {
      // AbortSignal.timeout() rejects with a TimeoutError DOMException
      if (error.name === 'TimeoutError' || error.name === 'AbortError') {
        // Timed out - reconnect immediately
        continue;
      }
      // Other error - wait before retry
      await new Promise(r => setTimeout(r, 1000));
    }
  }
}

Mathematical Analysis

Latency Characteristics

Latency_actual = T_event - T_connection

Where:

  • T_event = time when the server event occurs
  • T_connection = time when the connection was established

This difference is how long the request is held open waiting, not a delay the client perceives: the response is sent the instant the event occurs.

Key insight: Latency approaches zero for events occurring while the connection is held.
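To make the quantities concrete, here is a tiny numerical sketch (the timestamps are hypothetical): T_event - T_connection is how long the request is held open, while the delay the client actually perceives after the event is near zero.

```javascript
// Hypothetical timestamps (ms) for one long-poll cycle
const tConnection = 1000;  // client opens the request
const tEvent = 11000;      // server event occurs 10s later
const tDelivery = 11005;   // response reaches the client ~5ms after the event

// Time the request spent held open waiting for the event
const heldTime = tEvent - tConnection;        // 10000 ms

// Latency the client actually perceives: event -> delivery
const perceivedLatency = tDelivery - tEvent;  // 5 ms

console.log({ heldTime, perceivedLatency });
```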

Connection Efficiency

Efficiency = (Connections_withData / Connections_total) × 100%

Long polling typically achieves:

  • 95-100% efficiency (vs 10-30% for short polling)
  • Each connection either returns data or times out
  • No wasted "empty" responses
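The efficiency formula reduces to a one-liner; the sample counts below are illustrative, not measurements:

```javascript
// Efficiency = connections that returned data / total connections x 100
function efficiency(connectionsWithData, connectionsTotal) {
  if (connectionsTotal === 0) return 0;
  return (connectionsWithData / connectionsTotal) * 100;
}

// Long polling: nearly every connection carries data (a few time out)
console.log(efficiency(98, 100));   // ≈ 98%

// Short polling the same 100 events costs ~1,200 requests
console.log(efficiency(100, 1200)); // ≈ 8.3%
```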

Bandwidth Comparison

For 100 events over 1 hour:

Method             | Requests  | Data Transferred | Overhead
-------------------|-----------|------------------|---------------
Short Polling (3s) | 1,200     | 1.2 MB           | 92%
Long Polling       | ~100      | 100 KB           | 8%
Improvement        | 12× fewer | 12× less         | ~11× reduction
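The table's figures follow from back-of-the-envelope arithmetic. A sketch (the ~1 KB per round trip is an assumed average, and the 8% long-poll overhead is taken from the table rather than derived):

```javascript
// Scenario: 100 events over 1 hour (3,600 seconds)
const events = 100;

// Short polling fires every 3 seconds regardless of activity
const shortRequests = 3600 / 3;           // 1200 requests

// Long polling reconnects roughly once per event
const longRequests = events;              // ~100 requests

// Assume ~1 KB of headers + payload per round trip
const bytesPerRequest = 1024;
const shortTransfer = shortRequests * bytesPerRequest; // ~1.2 MB
const longTransfer = longRequests * bytesPerRequest;   // ~100 KB

// Overhead for short polling: requests that carried no event
const shortOverheadPct = ((shortRequests - events) / shortRequests) * 100;

console.log({ shortRequests, longRequests, shortOverheadPct }); // overhead ≈ 92%
```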

Server-Side Implementation

Basic Server Pattern

// Express.js example
app.get('/api/long-poll', async (req, res) => {
  const timeout = 30000; // 30 seconds
  const startTime = Date.now();

  // Check for data until the timeout expires
  while (Date.now() - startTime < timeout) {
    const data = await checkForData(req.user);
    if (data) {
      return res.json(data);
    }
    // Small delay to prevent CPU spinning
    await new Promise(r => setTimeout(r, 100));
  }

  // Timeout - send empty response
  res.status(204).end();
});

Event-Driven Pattern

class LongPollManager {
  constructor() {
    this.waitingClients = new Map();
  }

  addClient(userId, response) {
    // Time out the held connection after 30 seconds
    const timer = setTimeout(() => {
      if (this.waitingClients.has(userId)) {
        response.status(204).end();
        this.waitingClients.delete(userId);
      }
    }, 30000);

    this.waitingClients.set(userId, {
      response,
      timer,
      timestamp: Date.now()
    });
  }

  sendToClient(userId, data) {
    const client = this.waitingClients.get(userId);
    if (client) {
      clearTimeout(client.timer); // don't fire the timeout on a finished response
      client.response.json(data);
      this.waitingClients.delete(userId);
      return true;
    }
    return false;
  }

  broadcast(data) {
    for (const [, client] of this.waitingClients) {
      clearTimeout(client.timer);
      client.response.json(data);
    }
    this.waitingClients.clear();
  }
}

Scaling Challenges

Connection Limits

Each held connection consumes server resources:

MaxClients = (ServerConnections - Overhead) / ConnectionsPerClient

Typical limits:

  • Node.js: ~10,000 connections per process
  • Apache: ~150-250 connections (prefork)
  • Nginx: ~10,000+ connections
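Plugging illustrative numbers into the formula above (the reserved-connection count and per-client figures are assumptions, not measured limits):

```javascript
// MaxClients = (ServerConnections - Overhead) / ConnectionsPerClient
function maxClients(serverConnections, overhead, connectionsPerClient) {
  return Math.floor((serverConnections - overhead) / connectionsPerClient);
}

// e.g. a Node.js process capped at ~10,000 sockets, with 200
// reserved for health checks and admin traffic (assumed numbers)
console.log(maxClients(10000, 200, 1)); // 9800 clients

// If each client holds two connections (say, two open tabs)
console.log(maxClients(10000, 200, 2)); // 4900 clients
```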

Resource Consumption

// Approximate memory per held connection (bytes)
const connectionMemory = {
  tcpSocket: 8192,  // 8 KB TCP buffer
  httpParser: 2048, // 2 KB HTTP state
  appState: 1024,   // 1 KB application data
  total: 11264      // ~11 KB per connection
};

// For 10,000 connections
const totalMemoryBytes = 10000 * connectionMemory.total; // ~110 MB

Optimization Strategies

1. Connection Pooling

Reuse connections across multiple clients:

class ConnectionPool {
  constructor(maxSize = 100) {
    this.pool = [];      // idle connections ready for reuse
    this.waiting = [];   // callers waiting for a connection
    this.created = 0;    // total connections ever created
    this.maxSize = maxSize;
  }

  async acquire() {
    if (this.pool.length > 0) {
      return this.pool.pop();
    }
    // Track created connections so the cap limits the total,
    // not just the currently idle ones
    if (this.created < this.maxSize) {
      this.created++;
      return this.createConnection();
    }
    // Pool exhausted - wait for a released connection
    return new Promise(resolve => {
      this.waiting.push(resolve);
    });
  }

  release(connection) {
    if (this.waiting.length > 0) {
      const resolve = this.waiting.shift();
      resolve(connection);
    } else {
      this.pool.push(connection);
    }
  }
}

2. Smart Timeout Management

Adjust timeouts based on activity patterns:

class AdaptiveTimeout {
  constructor() {
    this.baseTimeout = 10000;
    this.maxTimeout = 60000;
    this.activityHistory = [];
  }

  getTimeout() {
    if (this.activityHistory.length < 10) {
      return this.baseTimeout;
    }
    // Calculate average interval between events
    const intervals = [];
    for (let i = 1; i < this.activityHistory.length; i++) {
      intervals.push(this.activityHistory[i] - this.activityHistory[i - 1]);
    }
    const avgInterval =
      intervals.reduce((a, b) => a + b) / intervals.length;
    // Set timeout to 2x average interval
    return Math.min(avgInterval * 2, this.maxTimeout);
  }

  recordActivity() {
    this.activityHistory.push(Date.now());
    // Keep only last 20 events
    if (this.activityHistory.length > 20) {
      this.activityHistory.shift();
    }
  }
}

3. Message Queuing

Decouple event generation from delivery:

class MessageQueue {
  constructor() {
    this.queues = new Map(); // userId -> messages[]
  }

  enqueue(userId, message) {
    if (!this.queues.has(userId)) {
      this.queues.set(userId, []);
    }
    this.queues.get(userId).push(message);
  }

  dequeue(userId) {
    const messages = this.queues.get(userId) || [];
    this.queues.delete(userId);
    return messages;
  }

  hasMessages(userId) {
    return this.queues.has(userId) &&
           this.queues.get(userId).length > 0;
  }
}
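A quick usage sketch of the queue (the class is re-declared minimally here so the example runs on its own):

```javascript
// Minimal re-declaration of the MessageQueue above, for a standalone example
class MessageQueue {
  constructor() { this.queues = new Map(); }
  enqueue(userId, message) {
    if (!this.queues.has(userId)) this.queues.set(userId, []);
    this.queues.get(userId).push(message);
  }
  dequeue(userId) {
    const messages = this.queues.get(userId) || [];
    this.queues.delete(userId);
    return messages;
  }
  hasMessages(userId) {
    return this.queues.has(userId) && this.queues.get(userId).length > 0;
  }
}

// Events generated while the user was disconnected are not lost
const queue = new MessageQueue();
queue.enqueue('alice', { text: 'hello' });
queue.enqueue('alice', { text: 'world' });

// When alice's next long-poll request arrives, drain her queue at once
const pending = queue.dequeue('alice');
console.log(pending.length);             // 2
console.log(queue.hasMessages('alice')); // false
```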

Handling Edge Cases

1. Network Interruptions

class RobustLongPoller {
  constructor(url) {
    this.url = url;
    this.retryCount = 0;
    this.maxRetries = 3;
  }

  async poll() {
    try {
      const response = await fetch(this.url);
      this.retryCount = 0; // Reset on success
      return await response.json();
    } catch (error) {
      if (this.retryCount < this.maxRetries) {
        this.retryCount++;
        await this.backoff();
        return this.poll();
      }
      throw error;
    }
  }

  async backoff() {
    const delay = Math.min(1000 * Math.pow(2, this.retryCount), 10000);
    await new Promise(r => setTimeout(r, delay));
  }
}
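The backoff() method above yields a capped exponential schedule; extracted as a standalone helper (name mine) to show the delays it produces:

```javascript
// delay = min(1000 * 2^retryCount, 10000) ms, as in backoff() above
function backoffDelay(retryCount) {
  return Math.min(1000 * Math.pow(2, retryCount), 10000);
}

// Delays for retries 1 through 5: doubling, then capped at 10s
console.log([1, 2, 3, 4, 5].map(backoffDelay)); // [2000, 4000, 8000, 10000, 10000]
```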

2. Duplicate Prevention

class DuplicateHandler {
  constructor() {
    this.lastMessageId = null;
  }

  async longPoll(url) {
    const response = await fetch(url, {
      headers: { 'Last-Message-Id': this.lastMessageId || '' }
    });
    const data = await response.json();
    if (data.messageId === this.lastMessageId) {
      // Duplicate - ignore
      return null;
    }
    this.lastMessageId = data.messageId;
    return data;
  }
}

Comparison with Alternatives

vs Short Polling

Aspect         | Short Polling        | Long Polling
---------------|----------------------|-------------
Latency        | 0 - polling interval | ~0 ms
Requests/hour  | 1,200 (3s interval)  | 100-200
Efficiency     | 10-30%               | 95-100%
Server Load    | High                 | Medium
Implementation | Simple               | Medium

vs WebSockets

Aspect            | Long Polling     | WebSockets
------------------|------------------|-------------
Protocol          | HTTP             | WS/WSS
Direction         | Client-initiated | Bidirectional
Firewall Friendly | Yes              | Sometimes
Proxy Support     | Yes              | Limited
Fallback Needed   | No               | Yes

Real-World Applications

Good Use Cases ✅

  1. Chat Applications

    • Facebook Messenger (fallback)
    • WhatsApp Web (fallback)
    • Slack (fallback)
  2. Notifications

    • Gmail web interface
    • Twitter notifications
    • GitHub PR updates
  3. Live Feeds

    • News updates
    • Stock tickers (moderate frequency)
    • Social media feeds
  4. Collaborative Tools

    • Document collaboration (fallback)
    • Shared whiteboards (fallback)
    • Project management tools

Poor Use Cases ❌

  1. High-Frequency Updates

    • Real-time gaming
    • Video streaming
    • High-frequency trading
  2. Bidirectional Communication

    • Video calls
    • Screen sharing
    • Remote desktop

Monitoring and Debugging

Key Metrics

class LongPollMetrics {
  constructor() {
    this.connections = 0;
    this.activeConnections = 0;
    this.dataDelivered = 0;
    this.timeouts = 0;
    this.errors = 0;
    this.waitTimes = [];
  }

  recordConnection() {
    this.connections++;
    this.activeConnections++;
  }

  recordDisconnection(hadData, waitTime) {
    this.activeConnections--;
    if (hadData) {
      this.dataDelivered++;
      this.waitTimes.push(waitTime);
    } else {
      this.timeouts++;
    }
  }

  getStats() {
    const total = this.connections || 1; // avoid divide-by-zero before any traffic
    return {
      totalConnections: this.connections,
      activeConnections: this.activeConnections,
      efficiency: (this.dataDelivered / total * 100).toFixed(1) + '%',
      avgWaitTime: this.waitTimes.length
        ? this.waitTimes.reduce((a, b) => a + b, 0) / this.waitTimes.length
        : 0,
      timeoutRate: (this.timeouts / total * 100).toFixed(1) + '%'
    };
  }
}

Migration Path

When to upgrade from long polling:

  1. To WebSockets: When you need true bidirectional communication
  2. To Server-Sent Events: For one-way server push with auto-reconnect
  3. To HTTP/2 Server Push: When you control the infrastructure
  4. To WebRTC: For peer-to-peer real-time communication

Conclusion

Long polling strikes an excellent balance between simplicity and efficiency. It provides near real-time updates while maintaining HTTP compatibility, making it ideal for applications that need timely updates but don't require the complexity of WebSockets. Its ability to work through firewalls and proxies makes it a reliable fallback option for more advanced protocols.

If you found this explanation helpful, consider sharing it with others.
