Description: |
This paper examines how data-driven personalized decisions can be made while preserving consumer privacy. Our setting is one in which the firm chooses a personalized price based on each new customer's vector of individual features; the true set of individual demand-generating parameters is unknown to the firm and must therefore be estimated from historical data. We extend this classical framework of personalized pricing by also requiring that the firm's pricing policy preserve consumer privacy, or (formally) that it be differentially private -- an industry standard for privacy preservation. We consider two settings that are both theoretically and practically relevant: the central and local models of differential privacy, which differ in the strength of the privacy guarantees they provide. For both models, we develop privacy-preserving personalized pricing algorithms and derive theoretical bounds on their performance, as measured by the firm's revenue. Our analyses suggest that, if the firm possesses a sufficient amount of historical data, then it can achieve central differential privacy at a cost of the same order as the "classical" loss in revenue due to estimation error. Comparing the two models, we conclude that locally differentially private personalized pricing yields stronger privacy guarantees but leads to a much greater revenue loss for the firm. We confirm our theoretical findings in a series of numerical experiments based on synthetically generated data and the real-world On-line Auto Lending (CPRM-12-001) data set. Finally, we apply our theoretical framework to the setting of personalized assortment optimization.