This work by Marco Molinaro and R. Ravi from Carnegie Mellon University studies the complexity of online Packing Integer Programs (PIPs). It presents an algorithm whose competitive guarantee is independent of the number of columns, showing that general PIPs do not become harder as they grow wider. The talk casts online PIPs as a learning problem and uses tailored covering bounds to reduce the learning error, offering insight into online decision-making and pointing to further questions in algorithmic optimization.
Geometry of Online Packing Linear Programs
Marco Molinaro and R. Ravi
Carnegie Mellon University
Packing Integer Programs (PIPs)
• Non-negative c, A, b; A is an m × n matrix with entries in [0,1]
• Max c·x s.t. Ax ≤ b, x ∈ {0,1}^n
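For concreteness, a toy instance (my own illustration, not from the slides) with m = 2 and n = 3:

\[
\max\; 3x_1 + 2x_2 + x_3
\quad\text{s.t.}\quad
\begin{pmatrix} 1 & 0.5 & 0 \\ 0 & 1 & 1 \end{pmatrix} x \le \begin{pmatrix} 1 \\ 1 \end{pmatrix},
\qquad x \in \{0,1\}^3,
\]

whose optimum takes columns 1 and 3 for value 4; any solution containing column 2 together with another column overfills a budget.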
Online Packing Integer Programs
• Adversary chooses values for c, A, b
• …but columns are presented in random order
• …when a column arrives, its variable is set to 0 or 1 irrevocably
• b and n are known upfront
(figure: the columns of A arriving one at a time)
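A minimal sketch of this arrival model, assuming NumPy; the accept-everything-that-fits rule at the end is only a placeholder decision rule, not the algorithm from the talk:

```python
import numpy as np

def online_arrival(c, A, b, decide, rng=np.random.default_rng(0)):
    """Columns of A arrive in uniformly random order; `decide` sees only the
    current column (plus b, n, and what is already packed) and commits forever."""
    m, n = A.shape
    x = np.zeros(n, dtype=int)
    used = np.zeros(m)
    for t in rng.permutation(n):                          # adversarial data, random order
        if decide(c[t], A[:, t], used, b, n) and np.all(used + A[:, t] <= b):
            x[t] = 1                                      # irrevocable accept
            used += A[:, t]
    return x

# Placeholder rule: accept every column that still fits (NOT the talk's algorithm).
greedy = lambda ct, at, used, b, n: True
```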
Online Packing Integer Programs
• Goal: find a feasible solution that maximizes the expected value
• α-competitive: E[value of the solution] ≥ α · OPT, where OPT is the offline optimum
Previous Results
• First online problem: secretary problem [Dynkin 63]
• B-secretary problem (m = 1, b = B, A is all 1's) [Kleinberg 05]: (1−ε)-competitive for B ≥ Ω(1/ε²) (requirement does not depend on n)
• PIPs (B = min_i b_i) [FHKMS 10, AWY]: (1−ε)-competitive for B ≥ Ω((m/ε²) log n) (requirement depends on n)
Main Question and Result
• Q: Do general PIPs become more difficult for larger n?
• A: No!
• Main result: an algorithm that is (1−ε)-competitive when B = min_i b_i ≥ Ω((m²/ε²) log(m/ε))
High-level Idea
• Online PIP as learning
• Improving the learning error using tailored covering bounds
• Geometry of PIPs that allow good covering bounds
• Reduce general PIPs to the above
• For this talk: assume every right-hand side equals B, and show a weaker bound on B
Online PIP as Learning
• Reduction to learning a classifier [DH 09]
• Linear classifier: given a (dual) price vector p, set x_t(p) = 1 if c_t > p·A_t, and x_t(p) = 0 otherwise
(figure: example labeling of the columns by such a classifier)
• Claim: If the classification x(p) given by p satisfies
 1) A x(p) ≤ b [Feasible]
 2) every constraint i with p_i > 0 is filled to at least (1−ε)·b_i [Packs tightly]
 then x(p) is (1−ε)-optimal: c·x(p) ≥ (1−ε)·OPT. Moreover, such a classification always exists.
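A minimal sketch of this classification rule and the two conditions of the claim (function names are mine; assumes NumPy):

```python
import numpy as np

def classify(p, c, A):
    """Linear classifier induced by prices p: take column t exactly when its
    value c[t] exceeds its price p . A[:, t]."""
    return (c > A.T @ p).astype(int)

def feasible_and_tight(p, x, A, b, eps):
    """Conditions of the claim: x is feasible, and every constraint that
    carries a positive price is filled to at least (1 - eps) of its budget."""
    load = A @ x
    return bool(np.all(load <= b) and np.all(load[p > 0] >= (1 - eps) * b[p > 0]))
```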
Online PIP as Learning
• Solving the PIP via learning:
• Sample an ε fraction of the columns (the set S)
• Compute an appropriate dual vector p for the sampled PIP
• Use p to classify the remaining columns
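A rough end-to-end sketch of this sample-then-classify scheme, assuming NumPy. The subgradient routine is only a crude stand-in for the "appropriate" dual computation on the sample, and the step size, iteration count, and feasibility guard are my choices, not the paper's:

```python
import numpy as np

def learn_prices(c_S, A_S, b, eps, steps=2000, lr=0.05):
    """Projected subgradient descent on the Lagrangian dual of the sampled LP
    with budgets scaled down to eps*b (a stand-in for an exact dual solve)."""
    p = np.zeros(len(b))
    for _ in range(steps):
        taken = (c_S > A_S.T @ p).astype(float)   # columns the classifier p picks
        g = eps * b - A_S @ taken                 # subgradient of the dual at p
        p = np.maximum(0.0, p - lr * g)           # overfilled budgets raise prices
    return p

def sample_then_classify(c, A, b, eps, rng=np.random.default_rng(0)):
    m, n = A.shape
    order = rng.permutation(n)
    s = max(1, int(eps * n))
    S, rest = order[:s], order[s:]
    p = learn_prices(c[S], A[:, S], b, eps)       # sample columns are only observed
    x = np.zeros(n, dtype=int)
    used = np.zeros(m)
    for t in rest:                                # classify the remaining columns
        if c[t] > p @ A[:, t] and np.all(used + A[:, t] <= b):  # guard keeps feasibility
            x[t] = 1
            used += A[:, t]
    return x
```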
Online PIP as Learning
• Probability of learning a good classifier:
• Consider a classifier that overfills some budget: it can only be learned if the sample is skewed, which happens with probability at most exp(−Ω(ε²B))
• There are at most n^m distinct bad classifiers
• Union bounding over all bad classifiers, a bad one is learned with probability at most n^m · exp(−Ω(ε²B))
• So when B ≥ Ω((m/ε²) log n), we get a good classifier with high probability
• Improve this: the n^m count of classifiers is what forces the dependence on n
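Making the union-bound arithmetic explicit (the constants hidden in the Ω-notation are mine; the slides only give the shape):

\[
\Pr[\text{learn some bad classifier}] \;\le\; \underbrace{n^{m}}_{\#\text{classifiers}} \cdot \underbrace{e^{-\Omega(\epsilon^{2}B)}}_{\text{skewed sample}},
\]

which is small once ε²B ≳ m log n, i.e. B ≥ Ω((m/ε²) log n); this is the n-dependence the talk removes.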
Improved Learning Error
• Idea 1: Covering bounds via witnesses (handling multiple bad classifiers at a time)
• +-witness: q is a +-witness of p for constraint i if:
  • the columns picked by q are a subset of the columns picked by p, and
  • the total occupation of constraint i by the columns picked by q lies in a prescribed range (large enough to certify the overfilling, small enough that few such q exist)
• −-witness: similar, in terms of total weight
• Lemma: Suppose there is a witness set of size W. Then the probability of learning a bad classifier is at most W · exp(−Ω(ε²B))
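With a witness set W, the same arithmetic as before (again, the constants are mine) gives

\[
\Pr[\text{learn some bad classifier}] \;\le\; |W| \cdot e^{-\Omega(\epsilon^{2}B)},
\qquad\text{so}\qquad B \ge \Omega\!\Big(\frac{\log |W|}{\epsilon^{2}}\Big) \ \text{suffices};
\]

if |W| does not depend on n, the log n disappears from the requirement on B.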
Geometry of PIPs with Small Witness Set
• For some PIPs, any witness set must be large, so Idea 1 alone is not enough
• Idea 2: Consider PIPs whose columns lie on few (k) 1-d subspaces
(figure: columns lying on k = 2 one-dimensional subspaces)
• Lemma: For such PIPs, one can find a witness set whose size depends only on k, m and ε (not on n)
Geometry of PIPs with Small Witness Set
• Covering bound + witness size: it suffices that B ≥ Ω(log(witness-set size)/ε²), which no longer depends on n
• Final step: convert any PIP into one whose columns lie on few 1-d subspaces, losing only an ε fraction of the value
• Main result: the algorithm is (1−ε)-competitive when B ≥ Ω((m²/ε²) log(m/ε))
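One plausible way to carry out a reduction of this flavor, shown only as an illustration and assuming NumPy: snap every entry onto a small geometric grid, so that the columns take few distinct directions. The paper's exact rounding and its value-loss analysis are more careful (in particular about very small entries):

```python
import numpy as np

def round_columns(A, eps, floor=1e-3):
    """Round every positive entry of A up to the nearest value in the grid
    {floor * (1+eps)^j} (clamped to 1).  Each entry then takes one of
    O(log(1/floor)/eps) values, so the columns lie on a number of 1-d
    subspaces bounded independently of n.  Rounding up only shrinks the
    feasible region, so solutions of the rounded PIP remain feasible
    for the original one."""
    A = np.asarray(A, dtype=float)
    R = np.zeros_like(A)
    pos = A > 0
    j = np.ceil(np.log(np.maximum(A[pos], floor) / floor) / np.log1p(eps))
    R[pos] = floor * (1.0 + eps) ** j
    return np.minimum(R, 1.0)    # entries of A were in [0,1]; keep them there
```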
Conclusion
• Guarantee for online PIPs independent of the number of columns
• Asymptotically matches the guarantee for the single-constraint version [Kleinberg 05]
• Ideas:
  • Tailored covering bound based on witnesses
  • Analyze the geometry of the columns to obtain a small witness set
  • Both make the learning problem more robust
• Open problems:
  • Obtain the optimal bound on B? Can be done if columns are sampled with replacement [DJSW 11]
  • Generalize to AdWords-type problems
  • Better online models: infinite horizon? less randomness?