
Scaling Azure for Outgoing HTTP Requests

Scaling to thousands of requests / sec

Gergely Orosz

@GergelyOrosz



The Problem



Implementation

  • Windows Server 2012

  • C#

  • HttpClient (GET & POST)
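A minimal sketch of the kind of client code this setup describes (URL and payload are illustrative; note it creates one HttpClient per request, which later slides show to be part of the problem):

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class OutgoingRequests
    {
        static async Task Main()
        {
            // One HttpClient per call, disposed right away - the
            // naive pattern the talk starts from.
            using (var client = new HttpClient())
            {
                string body = await client.GetStringAsync("http://example.com/resource");
                Console.WriteLine(body.Length);

                var response = await client.PostAsync(
                    "http://example.com/resource",
                    new StringContent("{\"key\":\"value\"}"));
                Console.WriteLine((int)response.StatusCode);
            }
        }
    }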



Implementation

>~1000 req / sec

4 DCs, 12 VMs each

Network Errors



Network Errors

  • System.Net.WebException: The underlying connection was closed: An unexpected error occurred on a send.

  • Symptom of Port Exhaustion

  • Let’s scale up!
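With HttpWebRequest the exception above surfaces roughly like this (a sketch; the URL is a placeholder):

    using System;
    using System.Net;

    class FailingCall
    {
        static void Fetch(string url)
        {
            try
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                using (request.GetResponse()) { }
            }
            catch (WebException ex)
            {
                // Under heavy load: "The underlying connection was
                // closed: An unexpected error occurred on a send."
                Console.WriteLine(ex.Message);
            }
        }
    }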



Network Errors

  • After some more research…

  • Two Limits

    • Per deployment: ~350 req / sec

    • Per VM: ~20 req / sec



What’s Going On?

1. The VM Limit of ~17 req / sec

  • netstat

    • lots of TIME_WAIT connections

  • TCP

    • TCP close sequence

  • Windows default TCP configuration

    • TIME_WAIT = 4 minutes

    • Ports to use: 1024 – 5000

    • ≈4000 ports / (4 * 60 s) ≈ 16.66 req / sec
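Two standard Windows commands confirm this on a VM: the first counts sockets stuck in TIME_WAIT, the second shows the dynamic port range in use:

    netstat -an | find /c "TIME_WAIT"
    netsh int ipv4 show dynamicport tcp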



What’s going on?

1. The VM Limit of ~17 req / sec

  • Port exhaustion

    • Change default Windows configurations

      • Decrease TIME_WAIT

      • Or increase number of ports to use

    • Increase number of ports

      • netsh int ipv4 set dynamicport tcp start=1025 num=64511

      • New limit: 265 req / sec / VM
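The other option listed above, decreasing TIME_WAIT, is a registry setting on Windows; a sketch (TcpTimedWaitDelay is in seconds, 30 is the minimum, and a reboot is required):

    reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v TcpTimedWaitDelay /t REG_DWORD /d 30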



What’s going on?

2. The Deployment Limit of ~350 req / sec

  • NAT

    • TIME_WAIT in Azure is 180 seconds

    • 65K ports

    • 65,000 ports / 180 s ≈ 360 req / sec

  • Need to add more deployments to scale beyond this limit



Some Other Findings

  • Choice of the .NET client library

    • Success rate under heavy load

      • HttpClient – 78%

      • Shared HttpClient – 98.5%

      • WebRequest – 99.5%

    • Use WebRequest when possible

      • Or share HttpClient instances
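A sketch of the two better-performing options measured above (names and URL handling are illustrative): a single HttpClient shared for the lifetime of the process, and plain HttpWebRequest:

    using System.IO;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;

    static class Clients
    {
        // Shared HttpClient: one instance per process, so TCP
        // connections are pooled instead of opened per request.
        static readonly HttpClient Shared = new HttpClient();

        public static Task<string> GetSharedAsync(string url)
        {
            return Shared.GetStringAsync(url);
        }

        // The WebRequest alternative (the 99.5% success rate above).
        public static string GetWithWebRequest(string url)
        {
            var request = (HttpWebRequest)WebRequest.Create(url);
            using (var response = request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                return reader.ReadToEnd();
            }
        }
    }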



Some Other Findings

  • Tuning of IIS

    • A configuration that worked better than the default one

      • dynamic port allocation: 64K

      • appConcurrencyLimit: 750K

      • queueLength: 65K

      • minWorkerThreads: 5K, maxWorkerThreads: 10K

      • minIoThreads: 500, maxIoThreads: 1000

      • requestQueueLimit: 750K

      • connectionTimeout: 3 minutes

  • Increased successful responses by 30%
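Several of these values map onto the processModel element in .NET's machine.config; a sketch with the numbers above (attribute names are the standard .NET ones and do not match the slide's labels one-to-one; processModel thread values apply per CPU, and autoConfig must be off for explicit values to take effect):

    <system.web>
      <processModel autoConfig="false"
                    minWorkerThreads="5000"
                    maxWorkerThreads="10000"
                    minIoThreads="500"
                    maxIoThreads="1000"
                    requestQueueLimit="750000" />
    </system.web>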



Summary

  • Know how TCP & NAT work

    • Increase number of dynamic ports on VMs to scale beyond 17 req / sec

    • Increase number of deployments to scale beyond 350 req / sec

  • Use WebRequest or shared HttpClient

  • Tune IIS for performance

  • You’re now ready to scale 



Thank You

@GergelyOrosz

Visualtini

