Estimation of individual treatment effects is often used as the basis for contextual decision making in fields such as healthcare, education, and economics. However, in many real-world applications it is sufficient for the decision maker to have estimates of upper and lower bounds on the potential outcomes under treatment and non-treatment. In these cases, we can achieve better finite-sample efficiency by estimating simple functions that correctly bound the potential outcomes, rather than directly estimating the potential outcomes themselves, which may be complex and hard to estimate. Our theoretical analysis highlights a tradeoff between the complexity of the learning task and the confidence with which the resulting bounds cover the true potential outcomes. Guided by our theoretical findings, we develop an algorithm for learning upper and lower bounds on the potential outcomes under treatment and non-treatment. Our algorithm finds the bound estimates that maximize an objective function defined by the decision maker without violating a required false coverage rate. We demonstrate the algorithm’s performance and show how it can be used to guide decision making on a clinical dataset and a well-known causality benchmark. Our algorithm outperforms the state of the art, providing tighter intervals without violating the required false coverage rate.
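
As an illustrative formalization (using assumed notation, not necessarily the paper's own), the learning problem described above can be read as a constrained optimization over candidate lower and upper bound functions $\ell_t$ and $u_t$ for each treatment arm $t \in \{0, 1\}$:
$$\max_{\ell_t,\, u_t} \; J(\ell_t, u_t) \quad \text{subject to} \quad \Pr\big[\, Y(t) \notin [\ell_t(X),\, u_t(X)] \,\big] \le \alpha, \qquad t \in \{0, 1\},$$
where $J$ denotes the decision maker's objective (for instance, interval tightness) and $\alpha$ is the required false coverage rate; $J$, $\ell_t$, $u_t$, and $\alpha$ are assumed symbols introduced here only for illustration.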