
User Governed Workflows – Responding in Real Time to Business Priority



Presentation Transcript


  1. User Governed Workflows – Responding in Real Time to Business Priority
     • What this talk is about:
       • Scatter/Gather patterns in BizTalk
       • What happens when there is a very large number of work items to process…
       • Ideas for an implementation strategy
     • "Pre-Requisites":
       • Basic understanding of correlation / convoys in BizTalk
       • Exposure to integration patterns

  2. Scott Colestock
     Principal – Trace Ventures, LLC
     blog: www.traceofthought.net

  3. Agenda
     • Explore Scatter/Gather techniques in BizTalk
     • Establish common terms…
     • Business problem motivating a particular variation…
     • Drill-down into "user-controllable & governed" Scatter/Gather

  4. Scatter / Gather Background
     • Gregor Hohpe's Enterprise Integration Patterns defines "Scatter/Gather" as a composition of "Recipient List" and "Aggregator"
     • See Enterprise Integration Patterns with BizTalk Server 2004 (diagrams sourced from there!)
     • Recipient List: a single message (or type of message) is forwarded to multiple recipients, based on criteria such as message content (see the sketch below)
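
  To make the Recipient List idea concrete outside of the diagrams, here is a minimal C# sketch (not from the deck, and not BizTalk orchestration code): each candidate recipient carries a content-based predicate, and a message is forwarded only to the recipients whose criteria match. The QuoteRequest type and the recipient names are hypothetical.

    // Minimal sketch of the Recipient List pattern (illustrative only; all names are hypothetical,
    // and this is plain C#, not BizTalk orchestration code).
    using System;
    using System.Collections.Generic;
    using System.Linq;

    record QuoteRequest(string Region, decimal Amount);

    record Recipient(string Name, Func<QuoteRequest, bool> Criteria)
    {
        public void Send(QuoteRequest msg) =>
            Console.WriteLine($"Forwarding {msg.Amount:C} request to {Name}");
    }

    class Program
    {
        // "Scatter": forward one message to every recipient whose content-based criteria match.
        static void Scatter(QuoteRequest msg, IEnumerable<Recipient> candidates)
        {
            foreach (var r in candidates.Where(c => c.Criteria(msg)))
                r.Send(msg);
        }

        static void Main()
        {
            var recipients = new List<Recipient>
            {
                new("EU-Supplier", m => m.Region == "EU"),
                new("US-Supplier", m => m.Region == "US"),
                new("BigDeals",    m => m.Amount > 100_000m),
            };
            Scatter(new QuoteRequest("EU", 250_000m), recipients);
        }
    }

  The same selection logic is what the next slide expresses in BizTalk terms: as a filter in each Parallel-shape branch, as a computed list driving a loop, or as MessageBox subscription filters.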

  5. Scatter / Gather Background (cont.)
     • Implementations for Recipient List:
       • Parallel shape – good for a fixed number of synchronous interactions; each branch applies its inclusion criteria before the Send
       • Loop with computed recipients – good for a dynamic number of async interactions; a Send shape executes on each loop iteration (sketched after this list)
       • MessageBox direct publishing (no orchestration needed) – good for a dynamic number of async one-way interactions, especially when the criteria for who receives the message mesh well with BizTalk subscription filters
     • [Missing from Hohpe: Send Port Groups…]
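
  As a companion to the "loop with computed recipients" bullet, here is a rough C# analogue (not XLANG/s): the recipient set is computed from the message content at runtime, and one asynchronous one-way send happens per loop iteration. ComputeRecipients, SendAsync, and the carrier names are invented for illustration.

    // Sketch of the "loop with computed recipients" variant (illustrative C#, not an orchestration;
    // ComputeRecipients, SendAsync, and the recipient names are hypothetical).
    using System;
    using System.Collections.Generic;
    using System.Threading.Tasks;

    class ComputedRecipientListDemo
    {
        // The recipient set is not known at design time; it is computed from the message content.
        static IReadOnlyList<string> ComputeRecipients(string message) =>
            message.Contains("priority") ? new[] { "FastCarrier", "AirFreight" }
                                         : new[] { "GroundCarrier" };

        static Task SendAsync(string recipient, string message)
        {
            Console.WriteLine($"Async one-way send to {recipient}: {message}");
            return Task.CompletedTask;
        }

        static async Task Main()
        {
            string message = "priority shipment #42";

            // One send per loop iteration, mirroring a Send shape inside a Loop shape.
            foreach (var recipient in ComputeRecipients(message))
                await SendAsync(recipient, message);
        }
    }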

  6. Recipient-List via Parallel Shape

  7. (Computed) Recipient-List via Loop

  8. Scatter / Gather Background (cont.)
     • Aggregator: collect a sequence of incoming messages and consolidate them into a single message
       • (Recipient List "scatters"; Aggregator "gathers")
     • Implementations for Aggregator:
       • Parallel shape with Receive shapes & "message integration" – optionally inline with the parallel shape used by the Recipient List
       • Loop that Receives (& integrates responses) on each iteration, and understands when sufficient responses have been received (see the sketch below)
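
  The gathering side can be pictured the same way. Below is a rough C# sketch of the "loop that receives and integrates responses" variant, where a channel stands in for the MessageBox and "done" simply means an expected count of responses has been reached; in practice the completion condition could also be a timeout or a terminator message. All names and numbers here are illustrative.

    // Sketch of the Aggregator as a receive-and-integrate loop (illustrative C#;
    // the channel stands in for the MessageBox, and "expected count" is just one completion rule).
    using System;
    using System.Threading.Channels;
    using System.Threading.Tasks;

    class AggregatorDemo
    {
        static async Task<decimal> Aggregate(ChannelReader<decimal> responses, int expectedCount)
        {
            decimal total = 0;
            int received = 0;

            // Receive and integrate one response per iteration; stop when sufficient responses arrived.
            while (received < expectedCount)
            {
                total += await responses.ReadAsync();
                received++;
            }
            return total;   // the consolidated ("gathered") message
        }

        static async Task Main()
        {
            var channel = Channel.CreateUnbounded<decimal>();
            for (int i = 1; i <= 5; i++) await channel.Writer.WriteAsync(i * 10m);

            Console.WriteLine($"Aggregated: {await Aggregate(channel.Reader, expectedCount: 5)}");
        }
    }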

  9. Recipient-List with inline Aggregation

  10. Scatter / Gather Background (cont.)
     • Whether you use parallel shapes or loops to implement the scatter and the gather…
     • The scatter & gather can co-exist in the same orchestration
     • Or exist in distinct orchestrations:
       • The Recipient List pattern publishes work
       • Work is done in subscribing orchestrations, which publish their results
       • The Aggregator subscribes to those results & publishes an aggregated message

  11. What happens when there is a large number of items to scatter?
     • Typical examples involve getting quotes from suppliers or shippers (small scale)
     • What if I want to execute one or more services against the data represented by each of several thousand elements in a set?
       • And then compute an aggregate result?
     • Example: bidding to purchase a large set of assets
       • Each asset in a package of thousands requires analysis via several services (a "double scatter/gather")
       • Results must be persisted and a price for the bid computed

  12. Large Scale Scatter/Gather
     • A Recipient List loop will be the scatter technique
       • A parallel shape would be too wide (kidding)
       • The # of items to scatter is dynamic
       • Alternative: individual pieces of work published to the MessageBox by an external system or a de-batching pipeline
     • Once you have thousands of items to execute against (to "scatter")… you need "manageability"
       • You need pause, resume, and cancel – initiated through an end-user experience
       • The Recipient List loop needs to look for and manage interruptions!
       • …But it is quickly too late!

  13. Large Scale Scatter/Gather (cont.)
     • Work can be "scattered" much more quickly than it can be acted upon
       • So it will accumulate in the MessageBox as queued work…
     • Pausing or canceling the "scattering" process isn't sufficient – at large scale, there will quickly be too much work already pending
     • Do you really want to wait for 1,000 orchestrations to spin up, only to have them receive a user cancellation notice and quit? (See the illustration below.)
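
  To put rough numbers on this (the rates below are purely hypothetical, not measurements from the deck): whenever items are scattered faster than downstream work completes, the MessageBox backlog grows linearly for as long as the scatter runs.

    // Back-of-envelope illustration of unthrottled scattering (hypothetical rates only).
    using System;

    class BacklogIllustration
    {
        static void Main()
        {
            double scatterRatePerSec  = 100;   // assumed: items published per second
            double completeRatePerSec = 10;    // assumed: items fully processed per second

            foreach (int seconds in new[] { 10, 30, 60 })
            {
                double backlog = (scatterRatePerSec - completeRatePerSec) * seconds;
                Console.WriteLine($"After {seconds,2}s: ~{backlog:N0} items queued and outside our control");
            }
        }
    }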

  14. [Diagram: scatter work → accumulating work in the MessageBox → work on each item → aggregate (aka gather)]

  15. Large Scale Scatter/Gather Solution
     • A "scattering" loop that on each iteration:
       • Performs the scatter operation and increments a "work in flight" counter
       • Looks for Pause/Resume/Cancel messages and acts on them accordingly
       • Delays if the "work in flight" counter exceeds a defined threshold
     • A "gathering" loop that:
       • Waits for responses and knows when it is "done"
       • Decrements the "work in flight" counter on each iteration
     • Key: prevent queuing work that we can't easily control (pause/resume/cancel) – see the sketch below
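
  The following C# sketch models the control logic described on this slide. It is not orchestration code: a blocking collection stands in for the MessageBox, an in-memory queue stands in for the pause/resume/cancel control messages, and the threshold, delays, and all names are assumptions. A real BizTalk implementation would express the same logic with Listen/Receive shapes, correlation, and a convoy rather than threads.

    // Sketch of the governed scatter/gather control logic (plain C#, not an orchestration;
    // the counter, threshold, delays, and control channel are illustrative assumptions).
    using System;
    using System.Collections.Concurrent;
    using System.Threading;
    using System.Threading.Tasks;

    enum Control { Pause, Resume, Cancel }

    class GovernedScatterGather
    {
        const int MaxInFlight = 10;                                        // "work in flight" threshold
        static int inFlight;                                               // incremented by scatter, decremented by gather
        static readonly BlockingCollection<int> work = new();              // stands in for the MessageBox
        static readonly ConcurrentQueue<Control> controlMessages = new();  // pause/resume/cancel channel

        static async Task Scatter(int totalItems)
        {
            bool paused = false;
            for (int item = 0; item < totalItems; item++)
            {
                // On each iteration, look for Pause/Resume/Cancel messages and act on them.
                do
                {
                    while (controlMessages.TryDequeue(out var msg))
                    {
                        if (msg == Control.Cancel) { work.CompleteAdding(); return; }
                        paused = msg == Control.Pause;
                    }
                    if (paused) await Task.Delay(50);
                } while (paused);

                // Delay while "work in flight" is at the threshold, so work we can no longer
                // pause, resume, or cancel never piles up downstream.
                while (Volatile.Read(ref inFlight) >= MaxInFlight)
                    await Task.Delay(25);

                Interlocked.Increment(ref inFlight);
                work.Add(item);                                            // the actual "scatter" of one item
            }
            work.CompleteAdding();
        }

        static async Task Gather()
        {
            int gathered = 0;
            foreach (var item in work.GetConsumingEnumerable())
            {
                await Task.Delay(10);                                      // simulated downstream processing
                Interlocked.Decrement(ref inFlight);                       // one response gathered
                gathered++;
            }
            Console.WriteLine($"Gathered {gathered} results; in flight: {inFlight}");
        }

        static async Task Main()
        {
            var gatherTask = Task.Run(Gather);

            // Simulate a user pausing and then resuming the run partway through.
            _ = Task.Run(async () =>
            {
                await Task.Delay(100); controlMessages.Enqueue(Control.Pause);
                await Task.Delay(200); controlMessages.Enqueue(Control.Resume);
            });

            await Scatter(totalItems: 100);
            await gatherTask;
        }
    }

  The essential point is in the Scatter loop: it refuses to publish more work while the in-flight count is at the threshold, so a pause or cancel only ever has a bounded amount of already-queued work to contend with.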

  16. Demo

  17. Conclusion
     • Scatter/Gather is a composition of the "Recipient List" and "Aggregator" patterns
     • Several techniques are available for implementing Scatter/Gather in BizTalk
     • Once the process becomes very lengthy, a "management function" (pause/resume/cancel) over a given instance of the process becomes desirable
