Long Polling: The Patient Connection
Understanding long polling - an efficient approach where the server holds requests open until data is available.
Long Polling
Long polling represents a significant improvement over short polling by holding the connection open until data is available. Like a patient waiter at a restaurant who stands by your table until you're ready to order, long polling eliminates the constant back-and-forth of traditional polling.
Long Polling Advantages:
- Near real-time data delivery (latency ~0ms after event occurs)
- No wasted requests - only reconnects after receiving data
- Server controls timing - sends data when available
- Works through firewalls and proxies (standard HTTP)
- Graceful degradation with timeouts
How Long Polling Works
The Connection Lifecycle
```
Client                             Server
  |                                  |
  |-------- GET /api/data --------->|
  |                                  | (Hold connection)
  |  Connection held                 | (Waiting for event...)
  |  (10 seconds...)                 |
  |                                  | EVENT OCCURS!
  |<------- 200 OK (data) ----------|
  |                                  |
  |-------- GET /api/data --------->| (Immediate reconnect)
  |                                  | (Hold connection)
  |  Connection held                 | (Waiting for event...)
  |  (3 seconds...)                  |
  |                                  | EVENT OCCURS!
  |<------- 200 OK (data) ----------|
  |                                  |
  ... (continues) ...
```
Implementation Pattern
```javascript
async function longPoll(url) {
  while (true) {
    try {
      const response = await fetch(url, {
        // Long timeout to hold the connection open
        signal: AbortSignal.timeout(30000)
      });
      const data = await response.json();
      handleData(data);
      // Immediate reconnect
      continue;
    } catch (error) {
      if (error.name === 'TimeoutError') {
        // AbortSignal.timeout() rejects with a TimeoutError DOMException -
        // reconnect immediately
        continue;
      }
      // Other error - wait before retry
      await new Promise(r => setTimeout(r, 1000));
    }
  }
}
```
Mathematical Analysis
Latency Characteristics

For an event delivered over long polling:

Latency = max(0, T_connection − T_event)

Where:
- T_event = time when the server event occurs
- T_connection = time when the connection was established

Key insight: latency approaches zero for events occurring while a connection is held (T_event ≥ T_connection); only events that fall in the brief reconnect gap wait at all.
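The latency model above can be sketched as a toy calculation (the helper name and the timestamps are illustrative, not part of any real API):

```javascript
// Toy model of long-polling delivery latency: events that fire while a
// connection is held are delivered instantly; events that fire during the
// reconnect gap wait until the next connection is established.
// All times are in milliseconds.
function deliveryLatency(eventTime, connectionTime) {
  return Math.max(0, connectionTime - eventTime);
}

// Event at t=5000 while a connection opened at t=0 is still held: latency 0
console.log(deliveryLatency(5000, 0));  // 0
// Event at t=100 during a reconnect gap; next connection opens at t=150
console.log(deliveryLatency(100, 150)); // 50
```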
Connection Efficiency
Long polling typically achieves:
- 95-100% efficiency (vs 10-30% for short polling)
- Each connection either returns data or times out
- No wasted "empty" responses
Bandwidth Comparison
For 100 events over 1 hour:
Method | Requests | Data Transferred | Overhead |
---|---|---|---|
Short Polling (3s) | 1,200 | 1.2 MB | 92% |
Long Polling | ~100 | 100 KB | 8% |
Improvement | 12× fewer | 12× less | 11× reduction |
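The request counts in the table follow from a quick back-of-the-envelope calculation; the helper function below is hypothetical, written only to reproduce those figures:

```javascript
// Short polling fires on a fixed clock; long polling fires roughly once per
// delivered event (plus the occasional timeout, ignored here).
function pollingStats({ hours, events, shortIntervalSec }) {
  const shortRequests = (hours * 3600) / shortIntervalSec;
  const longRequests = events;
  return {
    shortRequests,
    longRequests,
    requestReduction: shortRequests / longRequests
  };
}

const stats = pollingStats({ hours: 1, events: 100, shortIntervalSec: 3 });
console.log(stats); // { shortRequests: 1200, longRequests: 100, requestReduction: 12 }
```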
Server-Side Implementation
Basic Server Pattern
```javascript
// Express.js example
app.get('/api/long-poll', async (req, res) => {
  const timeout = 30000; // 30 seconds
  const startTime = Date.now();

  // Check for data until the timeout elapses
  while (Date.now() - startTime < timeout) {
    const data = await checkForData(req.user);
    if (data) {
      return res.json(data);
    }
    // Small delay to prevent CPU spinning
    await new Promise(r => setTimeout(r, 100));
  }

  // Timeout - send empty response
  res.status(204).end();
});
```
Event-Driven Pattern
```javascript
class LongPollManager {
  constructor() {
    this.waitingClients = new Map();
  }

  addClient(userId, response) {
    // Replace any previous pending request for this user
    this.removeClient(userId);

    // Time out the held request after 30 seconds; keep the timer handle so a
    // stale timeout can never end a newer request for the same user
    const timer = setTimeout(() => {
      const client = this.waitingClients.get(userId);
      if (client && client.response === response) {
        response.status(204).end();
        this.waitingClients.delete(userId);
      }
    }, 30000);

    this.waitingClients.set(userId, { response, timer, timestamp: Date.now() });
  }

  removeClient(userId) {
    const client = this.waitingClients.get(userId);
    if (client) {
      clearTimeout(client.timer);
      this.waitingClients.delete(userId);
    }
  }

  sendToClient(userId, data) {
    const client = this.waitingClients.get(userId);
    if (client) {
      clearTimeout(client.timer);
      client.response.json(data);
      this.waitingClients.delete(userId);
      return true;
    }
    return false;
  }

  broadcast(data) {
    for (const [, client] of this.waitingClients) {
      clearTimeout(client.timer);
      client.response.json(data);
    }
    this.waitingClients.clear();
  }
}
```
Scaling Challenges
Connection Limits
Each held connection consumes server resources:
Typical limits:
- Node.js: ~10,000 connections per process
- Apache: ~150-250 connections (prefork)
- Nginx: ~10,000+ connections
Resource Consumption
```javascript
// Approximate memory per connection, in bytes
const connectionMemory = {
  tcpSocket: 8192,  // 8 KB TCP buffer
  httpParser: 2048, // 2 KB HTTP state
  appState: 1024,   // 1 KB application data
  total: 11264      // ~11 KB per connection
};

// For 10,000 connections
const totalMemoryKB = 10000 * 11.264; // 112,640 KB ≈ 110 MB
```
Optimization Strategies
1. Connection Pooling
Reuse connections across multiple clients:
```javascript
class ConnectionPool {
  constructor(maxSize = 100) {
    this.pool = [];     // idle connections ready for reuse
    this.waiting = [];  // resolvers for callers waiting on a connection
    this.maxSize = maxSize;
    this.created = 0;   // total connections created, so maxSize caps the real total
  }

  async acquire() {
    if (this.pool.length > 0) {
      return this.pool.pop();
    }
    if (this.created < this.maxSize) {
      this.created++;
      return this.createConnection();
    }
    // Pool exhausted - wait until a connection is released
    return new Promise(resolve => {
      this.waiting.push(resolve);
    });
  }

  release(connection) {
    if (this.waiting.length > 0) {
      const resolve = this.waiting.shift();
      resolve(connection);
    } else {
      this.pool.push(connection);
    }
  }
}
```
2. Smart Timeout Management
Adjust timeouts based on activity patterns:
```javascript
class AdaptiveTimeout {
  constructor() {
    this.baseTimeout = 10000;
    this.maxTimeout = 60000;
    this.activityHistory = [];
  }

  getTimeout() {
    if (this.activityHistory.length < 10) {
      return this.baseTimeout;
    }

    // Calculate the average interval between recent events
    const intervals = [];
    for (let i = 1; i < this.activityHistory.length; i++) {
      intervals.push(this.activityHistory[i] - this.activityHistory[i - 1]);
    }
    const avgInterval =
      intervals.reduce((a, b) => a + b) / intervals.length;

    // Set timeout to 2x the average interval, capped at the maximum
    return Math.min(avgInterval * 2, this.maxTimeout);
  }

  recordActivity() {
    this.activityHistory.push(Date.now());
    // Keep only the last 20 events
    if (this.activityHistory.length > 20) {
      this.activityHistory.shift();
    }
  }
}
```
3. Message Queuing
Decouple event generation from delivery:
```javascript
class MessageQueue {
  constructor() {
    this.queues = new Map(); // userId -> messages[]
  }

  enqueue(userId, message) {
    if (!this.queues.has(userId)) {
      this.queues.set(userId, []);
    }
    this.queues.get(userId).push(message);
  }

  dequeue(userId) {
    const messages = this.queues.get(userId) || [];
    this.queues.delete(userId);
    return messages;
  }

  hasMessages(userId) {
    return this.queues.has(userId) && this.queues.get(userId).length > 0;
  }
}
```
Handling Edge Cases
1. Network Interruptions
```javascript
class RobustLongPoller {
  constructor(url) {
    this.url = url;
    this.retryCount = 0;
    this.maxRetries = 3;
  }

  async poll() {
    try {
      const response = await fetch(this.url);
      this.retryCount = 0; // Reset on success
      return await response.json();
    } catch (error) {
      if (this.retryCount < this.maxRetries) {
        this.retryCount++;
        await this.backoff();
        return this.poll();
      }
      throw error;
    }
  }

  async backoff() {
    // Exponential backoff, capped at 10 seconds
    const delay = Math.min(1000 * Math.pow(2, this.retryCount), 10000);
    await new Promise(r => setTimeout(r, delay));
  }
}
```
2. Duplicate Prevention
```javascript
class DuplicateHandler {
  constructor() {
    this.lastMessageId = null;
  }

  async longPoll(url) {
    const response = await fetch(url, {
      headers: {
        // Custom header so the server can skip already-delivered messages
        'Last-Message-Id': this.lastMessageId || ''
      }
    });
    const data = await response.json();

    if (data.messageId === this.lastMessageId) {
      // Duplicate - ignore
      return null;
    }

    this.lastMessageId = data.messageId;
    return data;
  }
}
```
Comparison with Alternatives
vs Short Polling
Aspect | Short Polling | Long Polling |
---|---|---|
Latency | 0 to one polling interval | ~0 ms |
Requests/hour | 1200 (3s interval) | 100-200 |
Efficiency | 10-30% | 95-100% |
Server Load | High | Medium |
Implementation | Simple | Medium |
vs WebSockets
Aspect | Long Polling | WebSockets |
---|---|---|
Protocol | HTTP | WS/WSS |
Direction | Client-initiated | Bidirectional |
Firewall Friendly | Yes | Sometimes |
Proxy Support | Yes | Limited |
Fallback Needed | No | Yes |
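These trade-offs can be acted on with a small transport chooser. The function and its capability flags are hypothetical; a real application would detect them from its environment:

```javascript
// Pick a transport based on the trade-offs in the tables above.
// `needsBidirectional` and `websocketsBlocked` are assumed inputs that the
// surrounding application would determine (e.g. via a failed WS handshake).
function chooseTransport({ needsBidirectional, websocketsBlocked }) {
  if (needsBidirectional && !websocketsBlocked) {
    return 'websocket';
  }
  // Long polling is plain HTTP, so it survives strict proxies and firewalls
  return 'long-polling';
}

console.log(chooseTransport({ needsBidirectional: true, websocketsBlocked: false })); // 'websocket'
console.log(chooseTransport({ needsBidirectional: true, websocketsBlocked: true }));  // 'long-polling'
```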
Real-World Applications
Good Use Cases ✅
- **Chat Applications**
  - Facebook Messenger (fallback)
  - WhatsApp Web (fallback)
  - Slack (fallback)
- **Notifications**
  - Gmail web interface
  - Twitter notifications
  - GitHub PR updates
- **Live Feeds**
  - News updates
  - Stock tickers (moderate frequency)
  - Social media feeds
- **Collaborative Tools**
  - Document collaboration (fallback)
  - Shared whiteboards (fallback)
  - Project management tools
Poor Use Cases ❌
- **High-Frequency Updates**
  - Real-time gaming
  - Video streaming
  - High-frequency trading
- **Bidirectional Communication**
  - Video calls
  - Screen sharing
  - Remote desktop
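The high-frequency case fails for a quantifiable reason: each delivered event costs a full HTTP round trip before the next event can arrive. A rough sketch (the 50 ms RTT is an illustrative assumption):

```javascript
// Fraction of each event cycle spent reconnecting rather than waiting for
// data. When events arrive faster than the round-trip time, the overhead
// dominates and long polling degrades into busy reconnecting.
function reconnectOverhead(eventIntervalMs, rttMs) {
  return rttMs / (eventIntervalMs + rttMs);
}

console.log(reconnectOverhead(10000, 50)); // ~0.005 - negligible for chat-like traffic
console.log(reconnectOverhead(16, 50));    // ~0.76  - a 60fps game spends most time reconnecting
```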
Monitoring and Debugging
Key Metrics
```javascript
class LongPollMetrics {
  constructor() {
    this.connections = 0;
    this.activeConnections = 0;
    this.dataDelivered = 0;
    this.timeouts = 0;
    this.errors = 0;
    this.waitTimes = [];
  }

  recordConnection() {
    this.connections++;
    this.activeConnections++;
  }

  recordDisconnection(hadData, waitTime) {
    this.activeConnections--;
    if (hadData) {
      this.dataDelivered++;
      this.waitTimes.push(waitTime);
    } else {
      this.timeouts++;
    }
  }

  getStats() {
    const total = this.connections || 1; // guard against division by zero
    return {
      totalConnections: this.connections,
      activeConnections: this.activeConnections,
      efficiency: (this.dataDelivered / total * 100).toFixed(1) + '%',
      avgWaitTime: this.waitTimes.length
        ? this.waitTimes.reduce((a, b) => a + b, 0) / this.waitTimes.length
        : 0,
      timeoutRate: (this.timeouts / total * 100).toFixed(1) + '%'
    };
  }
}
```
Migration Path
When to upgrade from long polling:
- To WebSockets: When you need true bidirectional communication
- To Server-Sent Events: For one-way server push with auto-reconnect
- To HTTP/2 Server Push: only if you control the full stack; note that major browsers have removed support
- To WebRTC: For peer-to-peer real-time communication
Related Concepts
- Short Polling - The simpler alternative
- WebSockets - Full duplex communication
- Protocol Comparison - Compare all approaches
- Server-Sent Events - One-way server push
Conclusion
Long polling strikes an excellent balance between simplicity and efficiency. It provides near real-time updates while maintaining HTTP compatibility, making it ideal for applications that need timely updates but don't require the complexity of WebSockets. Its ability to work through firewalls and proxies makes it a reliable fallback option for more advanced protocols.