Short Polling: The Impatient Client Pattern

Understanding short polling - a simple but inefficient approach to fetching data at regular intervals.


Short Polling

Short polling is the simplest approach to checking for updates from a server - like a child repeatedly asking "Are we there yet?" on a long car journey. While straightforward to implement, it's often inefficient and wasteful of resources.


How Short Polling Works

The Request Cycle

```
Client                                  Server
  |                                        |
  |---------- GET /api/data ------------->|
  |                      (Process immediately)
  |<--------- 200 OK (data) --------------|
  |                                        |
  | Wait 3 seconds                         |
  |                                        |
  |---------- GET /api/data ------------->|
  |                      (Process immediately)
  |<--------- 204 No Content -------------|
  |                                        |
  | Wait 3 seconds                         |
  |                                        |
  ... (repeats indefinitely) ...
```

Implementation Pattern

```javascript
function shortPoll(url, interval = 3000) {
  setInterval(async () => {
    try {
      const response = await fetch(url);
      const data = await response.json();
      if (data.hasUpdates) {
        handleUpdate(data);
      }
    } catch (error) {
      console.error('Polling error:', error);
    }
  }, interval);
}
```

Mathematical Analysis

Request Frequency

Requests per hour = 3600 / interval_seconds

For common intervals:

  • 1 second: 3,600 requests/hour
  • 3 seconds: 1,200 requests/hour
  • 5 seconds: 720 requests/hour
  • 10 seconds: 360 requests/hour
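The relationship can be checked with a one-line helper (illustrative only, not part of any particular API):

```javascript
// Requests per hour for a given polling interval in seconds.
function requestsPerHour(intervalSeconds) {
  return 3600 / intervalSeconds;
}

console.log(requestsPerHour(1));  // 3600
console.log(requestsPerHour(3));  // 1200
console.log(requestsPerHour(5));  // 720
console.log(requestsPerHour(10)); // 360
```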

Bandwidth Overhead

Overhead = N × (H_request + H_response)

Where:

  • N = Number of requests
  • H_request = Request headers (~700 bytes)
  • H_response = Response headers (~300 bytes)

Example calculation for 3-second polling over 1 hour:

```
Requests:            1,200
Headers per request: ~1 KB
Total overhead:      ~1.2 MB (just for headers!)
```
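The same calculation as a sketch in code; the header sizes are the rough estimates used above, not measured values:

```javascript
// Header-only overhead: N requests × (request headers + response headers).
function headerOverheadBytes(requests, hRequest = 700, hResponse = 300) {
  return requests * (hRequest + hResponse);
}

// One hour of 3-second polling: 1,200 requests × ~1 KB of headers each.
console.log(headerOverheadBytes(1200)); // 1200000 bytes ≈ 1.2 MB
```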

Average Latency

Latency_avg = interval / 2

The average time to discover new data is half the polling interval:

  • 1s interval → 0.5s average latency
  • 3s interval → 1.5s average latency
  • 10s interval → 5s average latency
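A tiny helper makes the rule of thumb concrete (illustrative only):

```javascript
// Expected delay before a new update is noticed: on average an update
// arrives midway between two polls, so the wait is half the interval.
function avgLatencyMs(intervalMs) {
  return intervalMs / 2;
}

console.log(avgLatencyMs(1000));  // 500
console.log(avgLatencyMs(3000));  // 1500
console.log(avgLatencyMs(10000)); // 5000
```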

Efficiency Problems

The Waste Factor

Most polling requests return empty responses:

Efficiency = (useful requests / total requests) × 100%

In typical scenarios:

  • 70-90% of requests return no new data
  • Each empty response still consumes bandwidth
  • Server must process every request
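The efficiency formula as a sketch in code; the hit rates shown are the illustrative 70–90% empty range from this section, not measurements:

```javascript
// Percentage of polls that actually returned new data.
// Computed as (useful × 100) / total to keep the arithmetic exact.
function pollingEfficiency(usefulRequests, totalRequests) {
  return (usefulRequests * 100) / totalRequests;
}

console.log(pollingEfficiency(30, 100)); // 30 — typical scenario
console.log(pollingEfficiency(10, 100)); // 10 — worst end of the range
```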

Server Load Impact

For 1,000 concurrent clients polling every 3 seconds:

```
Requests per second: 1,000 / 3 ≈ 333 req/s
Daily requests:      333 × 86,400 ≈ 28.8 million
Monthly requests:    ~864 million
```
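The same load arithmetic as a helper (a sketch; 86,400 seconds per day and a 30-day month are assumed):

```javascript
// Aggregate request load for `clients` clients polling every `intervalSeconds`.
function serverLoad(clients, intervalSeconds) {
  return {
    perSecond: clients / intervalSeconds,
    perDay: (clients * 86400) / intervalSeconds,
    perMonth: (clients * 86400 * 30) / intervalSeconds,
  };
}

const load = serverLoad(1000, 3);
console.log(Math.round(load.perSecond)); // 333 req/s
console.log(load.perDay);                // 28800000 (≈ 28.8 million/day)
console.log(load.perMonth);              // 864000000 (864 million/month)
```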

Network Overhead Breakdown

| Component | Size | Frequency | Daily Total |
|---|---|---|---|
| HTTP Request Headers | ~700 B | Every 3 s | 20.2 MB |
| HTTP Response Headers | ~300 B | Every 3 s | 8.6 MB |
| Empty Response Body | ~50 B | 70% of requests | 1.0 MB |
| Actual Data | ~500 B | 30% of requests | 4.3 MB |
| Total Traffic | | | 34.1 MB |
| Useful Data | | | 4.3 MB (12.6%) |
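The daily totals can be re-derived from the per-request figures; the sizes and the 70/30 split are the table's estimates, not measurements:

```javascript
const requestsPerDay = 86400 / 3; // 28,800 requests/day at a 3s interval

const headerBytes = requestsPerDay * (700 + 300); // request + response headers
const emptyBytes  = requestsPerDay * 0.7 * 50;    // 70% return an empty body
const dataBytes   = requestsPerDay * 0.3 * 500;   // 30% carry actual data
const totalBytes  = headerBytes + emptyBytes + dataBytes;

console.log((totalBytes / 1e6).toFixed(1)); // "34.1" MB/day total
console.log((dataBytes / 1e6).toFixed(1));  // "4.3"  MB/day useful
```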

When Short Polling Makes Sense

Good Use Cases ✅

  1. Simple Status Checks

    • Server health monitoring
    • Build status updates
    • Queue length checking
  2. Infrequent Updates

    • Weather data (10-30 minute intervals)
    • Stock prices (delayed quotes)
    • News feed updates
  3. Stateless Operations

    • No session management needed
    • Each request is independent
    • CDN-cacheable responses
  4. Simple Implementation Requirements

    • Quick prototypes
    • Legacy system compatibility
    • Minimal client complexity

Poor Use Cases ❌

  1. Real-time Requirements

    • Live chat applications
    • Multiplayer games
    • Collaborative editing
  2. High-frequency Updates

    • Trading platforms
    • Live sports scores
    • Monitoring dashboards
  3. Resource-constrained Environments

    • Mobile applications
    • Metered connections
    • Battery-powered devices

Optimization Strategies

1. Adaptive Polling

Adjust interval based on activity:

```javascript
class AdaptivePoller {
  constructor(url) {
    this.url = url;
    this.minInterval = 1000;
    this.maxInterval = 30000;
    this.currentInterval = this.minInterval;
    this.consecutiveEmpty = 0;
  }

  async poll() {
    const response = await fetch(this.url);
    const data = await response.json();

    if (data.hasUpdates) {
      this.consecutiveEmpty = 0;
      this.currentInterval = this.minInterval; // Reset to fast polling
      this.handleData(data); // Application-defined handler
    } else {
      this.consecutiveEmpty++;
      this.backoff();
    }

    setTimeout(() => this.poll(), this.currentInterval);
  }

  backoff() {
    // Grow the interval by 50% per empty response, up to the cap
    this.currentInterval = Math.min(
      this.currentInterval * 1.5,
      this.maxInterval
    );
  }
}
```

2. Request Batching

Combine multiple data checks:

```javascript
// Instead of three separate polls:
setInterval(() => fetch('/api/messages'), 3000);
setInterval(() => fetch('/api/notifications'), 3000);
setInterval(() => fetch('/api/status'), 3000);

// Use a single batched request:
setInterval(async () => {
  const response = await fetch('/api/batch', {
    method: 'POST',
    body: JSON.stringify({
      endpoints: ['messages', 'notifications', 'status']
    })
  });
}, 3000);
```

3. Conditional Requests

Use ETags to avoid unnecessary data transfer:

```javascript
let lastETag = null;

async function pollWithETag() {
  const headers = {};
  if (lastETag) {
    headers['If-None-Match'] = lastETag;
  }

  const response = await fetch('/api/data', { headers });

  if (response.status === 304) {
    return; // No changes
  }

  lastETag = response.headers.get('ETag');
  const data = await response.json();
  handleUpdate(data);
}
```

4. Exponential Backoff

Reduce frequency during quiet periods:

```javascript
class ExponentialBackoffPoller {
  constructor(url) {
    this.url = url;
    this.baseInterval = 1000;
    this.maxInterval = 60000;
    this.backoffFactor = 2;
    this.currentInterval = this.baseInterval;
  }

  async poll() {
    const response = await fetch(this.url);
    const data = await response.json();

    if (data.hasUpdates) {
      this.currentInterval = this.baseInterval; // Reset on activity
      this.handleData(data); // Application-defined handler
    } else {
      // Double the interval per empty response, up to the cap
      this.currentInterval = Math.min(
        this.currentInterval * this.backoffFactor,
        this.maxInterval
      );
    }

    setTimeout(() => this.poll(), this.currentInterval);
  }
}
```

Common Pitfalls

1. Thundering Herd

All clients polling simultaneously can overwhelm the server:

```javascript
// Bad: all clients start at the same time
setInterval(poll, 3000);

// Good: add random jitter
const jitter = Math.random() * 1000;
setTimeout(() => {
  setInterval(poll, 3000);
}, jitter);
```

2. Memory Leaks

Not clearing intervals properly:

```javascript
// Bad: interval continues after the component unmounts
componentDidMount() {
  setInterval(this.poll, 3000);
}

// Good: clean up properly
componentDidMount() {
  this.interval = setInterval(this.poll, 3000);
}

componentWillUnmount() {
  clearInterval(this.interval);
}
```

3. Error Accumulation

Continuing to poll during errors:

```javascript
// Bad: keeps polling even during network issues
setInterval(async () => {
  try {
    await fetch('/api/data');
  } catch (error) {
    // Continues polling at the same rate
  }
}, 3000);

// Good: back off on errors
let errorCount = 0;
setInterval(async () => {
  try {
    await fetch('/api/data');
    errorCount = 0;
  } catch (error) {
    errorCount++;
    if (errorCount > 3) {
      // Switch to a longer interval or stop
    }
  }
}, 3000);
```

Monitoring and Metrics

Key Performance Indicators

  1. Hit Rate: Percentage of requests returning data
  2. Bandwidth Usage: Total bytes transferred
  3. Server Load: Requests per second
  4. Latency Distribution: Time to detect changes
  5. Error Rate: Failed requests percentage

Debug Logging

```javascript
class PollingMetrics {
  constructor() {
    this.totalRequests = 0;
    this.dataRequests = 0;
    this.emptyRequests = 0;
    this.errors = 0;
    this.startTime = Date.now();
  }

  logRequest(hasData, error = false) {
    this.totalRequests++;
    if (error) {
      this.errors++;
    } else if (hasData) {
      this.dataRequests++;
    } else {
      this.emptyRequests++;
    }
  }

  getStats() {
    const runtime = (Date.now() - this.startTime) / 1000;
    return {
      totalRequests: this.totalRequests,
      requestsPerSecond: this.totalRequests / runtime,
      hitRate: (this.dataRequests / this.totalRequests * 100).toFixed(1) + '%',
      efficiency: (this.dataRequests / this.totalRequests * 100).toFixed(1) + '%',
      errorRate: (this.errors / this.totalRequests * 100).toFixed(1) + '%'
    };
  }
}
```

Migration Path

When short polling becomes inadequate, consider:

  1. Long Polling: For near real-time updates
  2. WebSockets: For bidirectional real-time communication
  3. Server-Sent Events: For one-way server push
  4. WebRTC: For peer-to-peer communication

Conclusion

Short polling remains the simplest approach to server communication, making it ideal for prototypes and simple applications. However, its inefficiencies become apparent at scale. Understanding its limitations helps developers make informed decisions about when to use it and when to migrate to more sophisticated solutions.

If you found this explanation helpful, consider sharing it with others.
