
Direct3D12 and the future of graphics APIs


Presentation Transcript


  1. Direct3D12 and the future of graphics APIs Dave Oldcorn, Direct3D12 Technical Lead, AMD

  2. The Problem

  3. The problem • Mismatch between existing Direct3D and hardware capabilities • Lots of CPU cores, but only one stream of data • State communication in small chunks • “Hidden” work • Hard to predict from any one given call what the overhead might be • Implicit memory management • Hardware evolving away from classical register programming

  4. API landscape • Gap between PC ‘raw’ 3D APIs and the hardware has opened up • Very high level APIs now ubiquitous; easy to access even for casual developers, plenty of choice • Where the PC APIs are is a middle ground [Diagram: API spectrum by capability, ease of use and distance from the 3D engine: game engines (Unreal, Frostbite, BlitzTech, Unity, CryEngine) and Flash / Silverlight at the top; the application and OpenGL / D3D11 / D3D9 / D3D7/8 in the middle ground; “metal” (register-level access) and console APIs below, with the gap marked as the opportunity]

  5. What are the consequences? What are the solutions?

  6. Sequential API • Sequential API: state for a given draw can come from an arbitrary earlier time • Some states must be reconciled on the CPU (“delayed validation”) • All contributing state needs to be visible • The GPU isn’t like this; it consumes command buffers • Must save and restore state at the start and end of each buffer [Diagram: the state contributing to a draw, gathered from across the sequential API input stream]
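A minimal D3D11-style sketch of the problem (not from the slides): state set long before a draw still feeds it, so the driver can only reconcile the combination at the draw itself. The context and the `blendState`, `quadVS`, `litPS` objects are assumed to exist elsewhere.

```cpp
#include <d3d11.h>

// Hypothetical frame fragment: 'ctx' is the immediate context, the other
// objects were created at load time. State arrives in small chunks...
void RenderFrame(ID3D11DeviceContext* ctx,
                 ID3D11BlendState* blendState,
                 ID3D11VertexShader* quadVS,
                 ID3D11PixelShader* litPS)
{
    // ...this state might have been set many calls (or frames) earlier:
    ctx->OMSetBlendState(blendState, nullptr, 0xFFFFFFFF);
    ctx->VSSetShader(quadVS, nullptr, 0);

    // ... lots of unrelated work in between ...

    ctx->PSSetShader(litPS, nullptr, 0);

    // Only at the draw does the driver see the combination of all the small
    // state chunks above; it has to validate/patch them on the CPU now.
    ctx->Draw(3, 0);
}
```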

  7. Threading a sequential API • Sequential API threading: a simple producer / consumer model • Extra latency • Buffering has a cost • More threading would mean dividing tasks on a finer grain • Bottlenecked on the application or driver thread • Difficult to extract parallelism (Amdahl’s Law) [Diagram: application simulation feeding prebuild threads 0 and 1, the application render thread, the application driver thread, the runtime / driver, and a GPU execution queue holding queued buffers 0-2]

  8. Command buffer API • GPUs only listen to command buffers • Let the app build them • Command Lists, at the API level • Solves the sequential API’s CPU-side issues [Diagram: application simulation with threads 0 and 1 each building a command buffer, the runtime / driver, and a GPU execution queue holding queued buffers 0 and 1]
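A sketch of that pattern against the released D3D12 API, which may differ in detail from the preview build the talk describes. `RecordScenePart` is a hypothetical stand-in for app-specific recording; `device` and `queue` are assumed to exist.

```cpp
#include <d3d12.h>
#include <thread>
#include <vector>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Record one command list per worker thread, then submit them together.
void BuildAndSubmit(ID3D12Device* device, ID3D12CommandQueue* queue, int threadCount)
{
    std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(threadCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(threadCount);
    std::vector<std::thread> workers;

    for (int i = 0; i < threadCount; ++i)
    {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,   // no initial PSO
                                  IID_PPV_ARGS(&lists[i]));

        // Each thread records its slice of the frame independently; the
        // threads share almost no resources, so they scale with CPU cores.
        workers.emplace_back([i, &lists] {
            // RecordScenePart(lists[i].Get(), i);   // app-specific recording
            lists[i]->Close();                       // finish this command list
        });
    }
    for (auto& t : workers) t.join();

    // One submission of all the recorded command buffers.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```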

  9. Better scheduling • App has much more control over scheduling work • Both CPU side and GPU • Threads don’t really share much resource • Many more options for streaming assets [Diagram: in D3D11, command-buffer-building threads tend to interfere (create thread vs. driver thread); in D3D12, command-buffer-building threads are more independent (create thread, build threads), and GPU load is still added but only after queuing (create work, render work, GPU executes)]

  10. Pipeline objects • Pipeline objects get rid of JIT (just-in-time) compilation and enable LTCG (link-time code generation) for GPUs • Decouple interface and implementation • We’re aware that this is a hairpin bend for many graphics engines to negotiate • Many engines don’t think in terms of predicting state up front • The benefits are worth it [Diagram: simplified dataflow through the pipeline: index processing → VS → primitive generation → rasteriser → PS → rendertarget output]
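A sketch of building one pipeline object up front with the released API; the root signature and the `vsBlob` / `psBlob` shader bytecode are assumed to come from elsewhere, and most state is left at simple defaults.

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Build one monolithic pipeline object up front: all shader stages plus the
// fixed-function state, so the driver can compile and link once instead of
// patching shaders at draw time.
ComPtr<ID3D12PipelineState> CreateOpaquePSO(ID3D12Device* device,
                                            ID3D12RootSignature* rootSig,
                                            ID3DBlob* vsBlob, ID3DBlob* psBlob)
{
    D3D12_GRAPHICS_PIPELINE_STATE_DESC desc = {};
    desc.pRootSignature = rootSig;
    desc.VS = { vsBlob->GetBufferPointer(), vsBlob->GetBufferSize() };
    desc.PS = { psBlob->GetBufferPointer(), psBlob->GetBufferSize() };

    // Minimal fixed-function state; InputLayout is left empty on the
    // assumption that the VS fetches or generates its own vertex data.
    desc.RasterizerState.FillMode = D3D12_FILL_MODE_SOLID;
    desc.RasterizerState.CullMode = D3D12_CULL_MODE_BACK;
    desc.BlendState.RenderTarget[0].RenderTargetWriteMask = D3D12_COLOR_WRITE_ENABLE_ALL;
    desc.DepthStencilState.DepthEnable = FALSE;
    desc.SampleMask = 0xFFFFFFFFu;
    desc.PrimitiveTopologyType = D3D12_PRIMITIVE_TOPOLOGY_TYPE_TRIANGLE;
    desc.NumRenderTargets = 1;
    desc.RTVFormats[0] = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 1;

    ComPtr<ID3D12PipelineState> pso;
    device->CreateGraphicsPipelineState(&desc, IID_PPV_ARGS(&pso));
    return pso;
}
```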

  11. Render object binding mismatch • Hardware uses tables in video memory • BUT it is still programmed like a register solution • So one bind becomes: • Allocate a new chunk of video memory • Create a new copy of the entire table • Update the one entry • Write the register with the new table base address [Diagram: an on-chip root table (one per stage) whose entries point to SRD tables in GPU memory, e.g. a texture (SR) table and a constant buffer (CB) table, whose entries in turn point to (and hold the parameters of) resources in GPU memory]
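Purely illustrative pseudocode for the cost of that “one bind” in a register-style model; none of this is a real API, and `SrdEntry`, `AllocVideoMemory` and `WriteRootTableRegister` are invented names.

```cpp
#include <cstring>
#include <cstdint>
#include <cstddef>

// Conceptual sketch only: what ONE texture bind costs a driver when the
// hardware reads descriptor tables in video memory but the API still looks
// like "write register N".
struct SrdEntry { std::uint64_t gpuAddress; std::uint32_t params[4]; };

SrdEntry* AllocVideoMemory(std::size_t entries);         // hypothetical driver helper
void      WriteRootTableRegister(const SrdEntry* table); // hypothetical driver helper

void BindOneTexture_RegisterStyle(const SrdEntry* currentTable, std::size_t tableEntries,
                                  std::size_t slot, const SrdEntry& newEntry)
{
    // 1. Allocate a fresh chunk of video memory for the table.
    SrdEntry* newTable = AllocVideoMemory(tableEntries);

    // 2. Copy the ENTIRE existing table across.
    std::memcpy(newTable, currentTable, tableEntries * sizeof(SrdEntry));

    // 3. Update the one entry that actually changed.
    newTable[slot] = newEntry;

    // 4. Point the on-chip root table register at the new copy.
    WriteRootTableRegister(newTable);
}
```

Descriptor tables (next slide) are the way around steps 1-3: the app owns and edits the tables, and the root table just holds cheap pointers to them.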

  12. Descriptor Tables • Several tables of each type of resource • Easy to divide up by frequency • Tables can be of arbitrary size; dynamically indexed to provide bindless textures • Changing a pointer in the root table is cheap • Updating a descriptor in a table is not so cheap • Some dynamic descriptors are a requirement, but avoid them in general [Diagram: an on-chip root table whose entries (SR.T[0]-SR.T[3], UAV, Samp, CB.T[0], CB.T[1]) point into SRD tables in GPU memory, e.g. textures table 0 (SR.T[0][0..2]) and constant-buffer table 1 (CB.T[1][0..1])]
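A sketch with the released API: one SRV table and one CBV table in the root signature, bound per draw by rewriting the root-table pointers. The descriptor heap and the GPU handles passed to `BindTables` are assumed to be set up elsewhere.

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Root signature = the "root table": two cheap pointers to descriptor tables.
ComPtr<ID3D12RootSignature> CreateRootSig(ID3D12Device* device)
{
    D3D12_DESCRIPTOR_RANGE srvRange = {};
    srvRange.RangeType = D3D12_DESCRIPTOR_RANGE_TYPE_SRV;
    srvRange.NumDescriptors = 8;                  // textures t0-t7
    srvRange.BaseShaderRegister = 0;

    D3D12_DESCRIPTOR_RANGE cbvRange = {};
    cbvRange.RangeType = D3D12_DESCRIPTOR_RANGE_TYPE_CBV;
    cbvRange.NumDescriptors = 2;                  // constant buffers b0-b1
    cbvRange.BaseShaderRegister = 0;

    D3D12_ROOT_PARAMETER params[2] = {};
    params[0].ParameterType = D3D12_ROOT_PARAMETER_TYPE_DESCRIPTOR_TABLE;
    params[0].DescriptorTable = { 1, &srvRange };
    params[0].ShaderVisibility = D3D12_SHADER_VISIBILITY_PIXEL;
    params[1].ParameterType = D3D12_ROOT_PARAMETER_TYPE_DESCRIPTOR_TABLE;
    params[1].DescriptorTable = { 1, &cbvRange };
    params[1].ShaderVisibility = D3D12_SHADER_VISIBILITY_ALL;

    D3D12_ROOT_SIGNATURE_DESC desc = {};
    desc.NumParameters = 2;
    desc.pParameters = params;
    desc.Flags = D3D12_ROOT_SIGNATURE_FLAG_ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT;

    ComPtr<ID3DBlob> blob, error;
    D3D12SerializeRootSignature(&desc, D3D_ROOT_SIGNATURE_VERSION_1, &blob, &error);

    ComPtr<ID3D12RootSignature> rootSig;
    device->CreateRootSignature(0, blob->GetBufferPointer(), blob->GetBufferSize(),
                                IID_PPV_ARGS(&rootSig));
    return rootSig;
}

// Per draw: switching tables is just rewriting root-table pointers.
void BindTables(ID3D12GraphicsCommandList* cmd, ID3D12DescriptorHeap* cbvSrvHeap,
                D3D12_GPU_DESCRIPTOR_HANDLE srvTable, D3D12_GPU_DESCRIPTOR_HANDLE cbvTable)
{
    cmd->SetDescriptorHeaps(1, &cbvSrvHeap);
    cmd->SetGraphicsRootDescriptorTable(0, srvTable);  // cheap: root pointer swap
    cmd->SetGraphicsRootDescriptorTable(1, cbvTable);  // editing descriptors themselves costs more
}
```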

  13. KEY innovations

  14. KEY innovations

  15. NEW PROBLEMS (and tips to solve them)

  16. New visible limits • More draws in does not automatically mean more triangles out • You will not see full rendering rates with triangles averaging 1 pixel each. • Wireframe mode should look different to filled rendering

  17. New visible limits • Feeding the GPU much more efficiently means exploring interesting new limits that weren’t visible before • 10k/frame of anything is ~1µs per thing (a ~10 ms frame budget divided by 10,000 items is 1 µs each) • GPU pipeline depth is likely to be 1-10µs (1k-10k cycles) • Specific limit: context registers • Root shader table is NOT in the context • Compute doesn’t bottleneck on context registers

  18. Application in charge • The application is the arbiter of correct rendering • This is a serious responsibility • The benefits of D3D12 aren’t readily available without this condition • Applications must be warning-free on the debug layer • Different opportunities for driver intervention • Consider controlling risk by avoiding riskier techniques
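Since the application is the arbiter of correctness, running warning-free on the debug layer matters; a minimal sketch of enabling it with the released API before device creation.

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Enable the D3D12 debug layer (debug builds only) BEFORE creating the device;
// the layer reports the validation the runtime no longer does on the fast path.
ComPtr<ID3D12Device> CreateDeviceWithValidation()
{
#if defined(_DEBUG)
    ComPtr<ID3D12Debug> debug;
    if (SUCCEEDED(D3D12GetDebugInterface(IID_PPV_ARGS(&debug))))
        debug->EnableDebugLayer();
#endif

    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr,                    // default adapter
                      D3D_FEATURE_LEVEL_11_0,
                      IID_PPV_ARGS(&device));
    return device;
}
```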

  19. Application in charge • No driver thread in play • App can target much lower latency • BUT this implies the app has to be ready with new GPU work [Diagram: frame timelines for app render, driver and GPU. D3D11: the driver buffers the Present, so after the first frame there is no dead GPU time, at the cost of extra latency. D3D12: with no buffered Present, any gap in submitted work shows up as dead time on the GPU]

  20. Use command buffers sparingly • Each API command list maps to a single hardware command buffer • Starting / ending a command list has an overhead: it writes the full 3D state and may flush caches or idle the GPU • We think a good rule of thumb will be to target around 100 command buffers/frame • Use the multiple-submission API where possible [Diagram: multiple applications running on the system; application 0’s queue (CB0, CB1, CB2) and application 1’s queue (CB0) are interleaved as the GPU executes CB0, CB1, CB0, CB2]
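A sketch of the multiple-submission advice with the released API: gather the frame's already-closed command lists (on the order of ~100, per the rule of thumb above) and hand them to the queue in one call rather than submitting them one at a time.

```cpp
#include <d3d12.h>
#include <vector>

// Submit a whole frame's worth of already-Closed command lists in one call,
// instead of calling ExecuteCommandLists once per list.
void SubmitFrame(ID3D12CommandQueue* queue,
                 const std::vector<ID3D12GraphicsCommandList*>& frameLists)
{
    std::vector<ID3D12CommandList*> batch(frameLists.begin(), frameLists.end());
    queue->ExecuteCommandLists(static_cast<UINT>(batch.size()), batch.data());
}
```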

  21. Round-up

  22. All-new • There’s a learning curve here for all of us • In the main it’s a shallow one, compared at least to the general problem of multithreaded rendering • Multithreading is always hard • Simpler design means fewer bugs and more predictable performance

  23. What AMD plan to deliver • Release driver for Direct3D12 launch • Continuous engagement • With Microsoft • With ISVs • Bring your opinions to us and to Microsoft.

  24. QUESTIONS
