Social scientists have long been interested in discrimination and other inherent social inequities, and as such have developed models to evaluate policies through the dual lenses of efficiency and equity. More recently, computer scientists have shown how algorithms in many domains inherit and (sometimes inadvertently) bake in these same human biases and inequities. In this talk, I attempt to bring these two strands together by embedding concerns about algorithmic bias within a broader welfare economics framework. Instead of viewing the data as given, the framework begins with a model of the underlying social phenomena and their accompanying inequities. It then posits a social welfare function, where the social planner cares about both efficiency and equity. In particular, she places greater weight on equity than individual algorithm designers (firms or citizens) do. Intrinsic to this approach is that the social planner's preferences imply desired properties of the algorithm: the fairness of a given algorithm is not a primitive; instead, it is derived from the welfare of the outcomes it engenders. Several pieces of conventional wisdom do not hold true in this framework. For example, "blinding the algorithm" to variables such as race generally reduces welfare, even for the disadvantaged group. At the other extreme, I characterize situations where apparently fair algorithms can drastically increase inequities. Overall, I argue that it would be beneficial to model fairness and algorithmic bias more holistically, including both a generative model of the underlying social phenomena and a description of a global welfare function.
We study the impact of dynamic pricing (so-called "surge pricing") on relocation decisions by Uber's driver-partners and the corresponding revenue they collected. Using a natural experiment arising from an outage in the system that produces the surge pricing heatmap for a portion of Uber's driver-partners over 10 major cities, and a difference-in-differences approach, we study the short-run effect that visibility of the surge heatmap has on 1) drivers' decisions to relocate to areas with higher or lower prices and 2) drivers' revenue. We demonstrate that the ability to see the surge heatmap has a statistically significant impact on both outcomes. Ability to see the surge heatmap explains 10%-60% of Uber drivers' self-positioning decisions, attracts drivers toward areas with higher surge prices, and increases drivers' revenue on surged trips by up to 70%. This suggests that dynamic pricing helps drivers move to where riders' demand is largest, and that the resulting reduction in spatial search friction and spatial mismatch improves waiting times and welfare for both riders and drivers.
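For readers who want a concrete picture of the identification strategy described above, here is a minimal difference-in-differences sketch. The data file, all column names (treated, post, log_revenue, city, driver_id), and the controls are hypothetical placeholders, not the paper's actual specification.

```python
# Minimal difference-in-differences sketch. All column names and the data file
# are hypothetical placeholders; the paper's actual specification, controls,
# and clustering choices may differ.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("driver_sessions.csv")  # hypothetical session-level panel

# treated = 1 for drivers affected by the heatmap outage;
# post    = 1 for observations during the outage window.
model = smf.ols("log_revenue ~ treated * post + C(city)", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["driver_id"]})
print(result.params["treated:post"])  # the difference-in-differences estimate
```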
We assess the impact of home-sharing on residential house prices and rents. Using a dataset of Airbnb listings from the entire United States and an instrumental variables estimation strategy, we find that a 1% increase in Airbnb listings leads to a 0.018% increase in rents and a 0.026% increase in house prices at the median owner-occupancy rate zip code. The effect is moderated by the share of owner-occupiers, a result consistent with absentee landlords reallocating their homes from the long-term rental market to the short-term rental market. A simple model rationalizes these findings.
Over the last fifteen years, one of the major developments online has been the growth and proliferation of review websites such as TripAdvisor. The ready availability of independent information from past users poses interesting questions for marketing strategy. What role does advertising play in the new environment? How should firms adjust their advertising strategy to the presence of reviews? In this paper we address these questions in the context of the hotel industry. Using a data set of TripAdvisor hotel reviews and another describing hotels' advertising expenditures, we show, first, that overall ad spending decreased from 2002 to 2015, suggesting that online reviews have had the effect of displacing advertising. Second, there is a negative causal relationship between TripAdvisor ratings and advertising spending in the cross-section: hotels with higher ratings spend less. This suggests that user ratings and advertising are substitutes, not complements. Third, this relationship is stronger for independent hotels than for chains, and stronger in competitive markets than in noncompetitive markets. The former suggests that a strong brand name provides some immunity to reviews, and the latter suggests that when ratings are pivotal, the advertising response might be particularly strong. Finally, we show that the relationship between user ratings and advertising has strengthened over time, as websites such as TripAdvisor have become more influential. This provides further confirmation that the effect of online ratings on advertising operates through the demand side, and not the supply side. Hotels seem to react to reviews if and only if consumers react to them.
This paper is part of an emerging line of work at the intersection of machine learning and mechanism design, which aims to avoid noise in training data by correctly aligning the incentives of data sources. Specifically, we focus on the ubiquitous problem of linear regression, where strategyproof mechanisms have previously been identified in two dimensions. In our setting, agents have single-peaked preferences and can manipulate only their response variables. Our main contribution is the discovery of a family of group strategyproof linear regression mechanisms in any number of dimensions, which we call generalized resistant hyperplane mechanisms. The game-theoretic properties of these mechanisms --- and, in fact, their very existence --- are established through a connection to a discrete version of the Ham Sandwich Theorem.
We consider a data analyst's problem of purchasing data from strategic agents to compute an unbiased estimate of a statistic of interest. Agents incur private costs to reveal their data, and the costs can be arbitrarily correlated with their data. Once revealed, data are verifiable. This paper focuses on linear unbiased estimators. We design an individually rational and incentive compatible mechanism that optimizes the worst-case mean-squared error of the estimation, where the worst case is over the unknown correlation between costs and data, subject to a budget constraint in expectation. We characterize the optimal mechanism in closed form. We further extend our results to acquiring data for estimating a parameter in regression analysis, where private costs can correlate with the values of the dependent variable but not with the values of the independent variables.
We consider the problem of optimal dynamic information acquisition from many correlated information sources. Each period, the decision-maker jointly takes an action and allocates a fixed number of observations across the available sources. His payoff depends on the actions taken and on an unknown state. In the canonical setting of jointly normal information sources, we show that the optimal dynamic information acquisition rule proceeds myopically after finitely many periods. If signals are acquired in large blocks each period, then the optimal rule turns out to be myopic from period 1. These results demonstrate the possibility of robust and "simple" optimal information acquisition, and simplify the analysis of dynamic information acquisition in a widely used informational environment.
This paper is an axiomatic study of consistent approval-based multi-winner rules, i.e., voting rules that select a fixed-size group of candidates based on approval ballots. We introduce the class of counting rules, provide an axiomatic characterization of this class and, in particular, show that counting rules are consistent. Building upon this result, we axiomatically characterize three important consistent multi-winner rules: Proportional Approval Voting, Multi-Winner Approval Voting and the Approval Chamberlin--Courant rule. Our results demonstrate the variety of multi-winner rules and illustrate three different, orthogonal principles that multi-winner voting rules may represent: individual excellence, diversity, and proportionality.
Without monetary payments, the Gibbard-Satterthwaite theorem proves that under mild requirements all truthful social choice mechanisms must be dictatorships. When payments are allowed, the Vickrey-Clarke-Groves (VCG) mechanism implements the value-maximizing choice, and has many other good properties: it is strategy-proof, onto, deterministic, individually rational, and does not make positive transfers to the agents. By Roberts' theorem, with three or more alternatives, the weighted VCG mechanisms are essentially unique for domains with quasi-linear utilities. The goal of this paper is to characterize domains of non-quasi-linear utilities where "reasonable" mechanisms (with VCG-like properties) exist. Our main result is a tight characterization of the maximal non-quasi-linear utility domain, which we call the largest parallel domain. We extend Roberts' theorem to parallel domains, and use the generalized theorem to prove two impossibility results. First, any reasonable mechanism must be dictatorial when the type domain is quasi-linear together with any single non-parallel type. Second, for richer utility domains that still differ very slightly from quasi-linearity, every strategy-proof, onto and deterministic mechanism must be a dictatorship.
As the quality of computer hardware increases over time, cloud service providers have the ability to offer more powerful virtual machines (VMs) and other resources to their customers. But providers face several trade-offs as they seek to make the best use of improved technology. On one hand, more powerful machines are more valuable to customers and command a higher price. On the other hand, there is a cost to develop and launch a new product. Further, the new product competes with existing products. Thus, the provider faces two questions. First, when should new classes of VMs be introduced? Second, how should they be priced, taking into account both the VM classes that currently exist and the ones that will be introduced in the future?
This decision problem, combining scheduling and pricing new product introductions, is common in a variety of settings. One aspect that is more specific to the cloud setting is that VMs are rented rather than sold. Thus existing customers can switch to a new offering, albeit with some inconvenience. There is indeed evidence of customers' aversion to upgrades in the cloud computing services market. Based on a study of Microsoft Azure, we estimate that customers who arrive after a new VM class is launched are 50% more likely to use it than existing customers, indicating that these switching costs may be substantial.
This opens up a wide range of possible policies for the cloud service provider. Our main result is that a surprisingly simple policy is close to optimal in many situations: new VM classes are introduced on a periodic schedule and each is priced as if it were the only product being offered. (Periodic introductions have been noticeable in the practice of cloud computing. For example, Amazon Elastic Compute Cloud (EC2) launched new classes of the m.xlarge series in October 2007, February 2010, October 2012, and June 2015, i.e., in intervals of 28 months, 32 months, and 32 months.) We refer to this pricing policy as Myerson pricing, as these prices can be computed as in Myerson's classic paper (1981). This policy produces a marketplace where new customers always select the newest and best offering, while existing customers may stick with older VMs due to switching costs.
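To make "priced as if it were the only product being offered" concrete, the sketch below computes a Myerson-style monopoly price for a known value distribution by maximizing p·(1-F(p)). This is a generic toy illustration, not the paper's model, which additionally involves switching costs and the timing of new VM-class introductions.

```python
# Toy sketch of a Myerson-style monopoly ("as if sold alone") price: choose p
# to maximize expected revenue p * (1 - F(p)) for a known value distribution F.
# Illustration only; the paper's policy also accounts for switching costs and
# for when each new VM class is introduced.
import numpy as np

def monopoly_price(cdf, grid):
    revenue = grid * (1.0 - cdf(grid))
    return grid[np.argmax(revenue)]

grid = np.linspace(0.0, 1.0, 10001)
uniform_cdf = lambda p: np.clip(p, 0.0, 1.0)  # values uniform on [0, 1]
print(monopoly_price(uniform_cdf, grid))      # ~0.5, the textbook answer
```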
We study whether some of the most important models of decision-making under uncertainty are uniformly learnable, in the sense of PAC (probably approximately correct) learnability. Many studies in economics rely on Savage's model of (subjective) expected utility. The expected utility model is known to predict behavior that runs counter to how many agents actually make decisions (the contradiction usually takes the form of agents' choices in the Ellsberg paradox). As a consequence, economists have developed models of choice under uncertainty that seek to generalize the basic expected utility model. The resulting models are more general and therefore more flexible, but also more prone to overfitting. The purpose of our paper is to understand this added flexibility better. We focus on the classical expected utility (EU) model and its two most important generalizations: Choquet expected utility (CEU) and Max-min Expected Utility (MEU).
Our setting involves an analyst whose task is to estimate or learn an agent's preference based on data available on the agent's choices. A model of preferences is PAC learnable if the analyst can construct a learning rule to precisely learn the agent's preference with enough data. When a model is not learnable, we interpret this as the model being susceptible to overfitting. PAC learnability is known to be characterized by the model's VC dimension: thus our paper takes the form of a study of the VC dimension of economic models of choice under uncertainty. We show that EU and CEU have finite VC dimension, and are consequently learnable. Moreover, the sample complexity of the former is linear, and of the latter is exponential, in the number of states of uncertainty. The MEU model is learnable when there are two states but is not learnable when there are at least three states, in which case the VC dimension is infinite. Our results also exhibit a close relationship between learnability and the underlying axioms which characterize the model.
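For reference, the textbook characterization alluded to above ties PAC learnability to finiteness of the VC dimension d via sample-complexity bounds of the following form (realizable case); these are standard bounds, not results specific to the EU/CEU/MEU analysis.

```latex
% Textbook PAC sample-complexity bounds in terms of the VC dimension d
% (realizable case); standard results, not specific to the EU/CEU/MEU models.
m(\varepsilon, \delta) \;=\; O\!\left(\frac{d \log(1/\varepsilon) + \log(1/\delta)}{\varepsilon}\right),
\qquad
m(\varepsilon, \delta) \;=\; \Omega\!\left(\frac{d + \log(1/\delta)}{\varepsilon}\right).
```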
We study an online linear classification problem in which the data is generated by strategic agents who manipulate their features in an effort to change the classification outcome. In rounds, the learner deploys a classifier, then an adversarially chosen agent arrives and possibly manipulates her features to optimally respond to the learner's choice of classifier. The learner has no knowledge of the agents' utility functions or "real" features, which may vary widely across agents. Instead, the learner is only able to observe their "revealed preferences", i.e., the manipulated feature vectors they provide. For a broad family of agent cost functions, we give a computationally efficient learning algorithm that is able to obtain diminishing "Stackelberg regret" --- a form of policy regret that guarantees that the learner is realizing loss nearly as small as that of the best classifier in hindsight, even allowing for the fact that agents would have best-responded differently to the optimal classifier.
We develop a model of social learning from overabundant information: Agents have access to many sources of information, and observation of all sources is not necessary in order to learn the payoff-relevant state. Short-lived agents sequentially choose to acquire a signal realization from the best source for them. All signal realizations are public. Our main results characterize two starkly different possible long-run outcomes, and the conditions under which each obtains: (1) efficient information aggregation, where the community eventually achieves the highest possible speed of learning; (2) "learning traps," where the community gets stuck using a suboptimal set of sources and learns inefficiently slowly. A simple property of the correlation structure separates these two possibilities. In both regimes, we characterize which sources are observed in the long run and how often.
We study combinatorial auctions with bidders that exhibit the endowment effect. In most of the previous work on cognitive biases in algorithmic game theory (e.g., [Kleinberg and Oren, EC'14] and its follow-ups) the focus was on analyzing their implications and mitigating their negative consequences. In contrast, in this paper we show how in some cases cognitive biases can be harnessed to obtain better outcomes. Specifically, we study Walrasian equilibria in combinatorial markets. It is well known that Walrasian equilibria exist only in limited settings, e.g., when all valuations are gross substitutes, but fail to exist in more general settings, e.g., when the valuations are submodular. We consider combinatorial settings in which bidders exhibit the endowment effect, that is, their value for items increases with ownership. Our main result shows that when the valuations are submodular, even a mild degree of endowment effect is sufficient to guarantee the existence of Walrasian equilibria. In fact, we show that in contrast to Walrasian equilibria with standard utility maximizing bidders -- in which the equilibrium allocation must be efficient -- when bidders exhibit the endowment effect any local optimum can be an equilibrium allocation. Our techniques reveal interesting connections between the LP relaxation of combinatorial auctions and local maxima. We also provide lower bounds on the intensity of the endowment effect that the bidders must have in order to guarantee the existence of a Walrasian equilibrium in various settings.
We present tight bounds on the social welfare ratio of the combinatorial clock auction under the basic price increment rule used in practice. Specifically, under the assumption of truthful bidding, we prove that the welfare ratio is Θ(m^{2/3}) in the case of unit-demand bidders, and Θ(m) for general bidders. We explain how changes to the price increment rule affect the welfare guarantee, and extend our results to settings with strategic bidding.
We study the design of core-selecting payment rules for combinatorial auctions (CAs), a challenging setting where no strategyproof rules exist. Unfortunately, under the rule most commonly used in practice, the Quadratic rule (Day and Cramton, 2012), the Bayes-Nash equilibrium strategies are sufficiently far from truthful that truthful play may be an implausible model of bidder behavior, which also raises concerns about revenue and efficiency. In this paper, we present a computational approach for finding good core-selecting payment rules. We present a parametrized payment rule we call Fractional* that takes three parameters (reference point, weights, and amplification) as inputs. This way, we construct and analyze 366 rules across 29 different domains. To evaluate each rule in each domain, we employ a computational Bayes-Nash equilibrium solver. We first use our approach to study the well-known Local-Local-Global domain in detail, and identify a set of 20 "all-rounder rules" which beat Quadratic by a significant margin on efficiency, incentives, and revenue in all or almost all domains. To demonstrate the robustness of our findings, we take four of these all-rounder rules and evaluate them in the significantly larger LLLLGG domain (with six bidders and eight goods), where we show that all four rules also beat Quadratic. This suggests that, in practice, auctioneers may want to consider using alternative core-selecting payment rules because of the large improvements over Quadratic that may be available. Overall, our results demonstrate the power of a computational search approach in a properly parametrized mechanism design space.
As online ad offerings become increasingly complex, with multiple size configurations and layouts available to advertisers, the sale of web advertising space increasingly resembles a combinatorial auction with complementarities. Standard ad auction formats do not immediately extend to these settings, and truthful combinatorial auctions, such as the Vickrey-Clarke-Groves auction, can yield unacceptably low revenue. Core selecting auctions, which apply to combinatorial markets, boost revenue by setting prices so that no group of agents, including the auctioneer, can jointly improve their utilities by switching to a different allocation and payments. Among outcomes in the core, bidder-optimal core points have been the most widely studied due to their incentive properties, such as being implementable at natural equilibria.
Prior work in economics has studied heuristics and algorithms for computing approximate bidder-optimal core points, given oracle access to the welfare optimization problem. Relative to prior work, our goal is to develop an algorithm with asymptotically fewer oracle calls while maintaining theoretical performance guarantees. Our main result is a combinatorial algorithm that finds an approximate bidder-optimal core point with an almost linear number of calls to the welfare maximization oracle. Our algorithm is faster than previously proposed heuristics, it has theoretical guarantees, and it reveals some useful structural properties of the core polytope.
We take a two-pronged approach to evaluating our core-pricing method. We first consider a theoretical treatment of the problem of finding a bidder-optimal core point in a general combinatorial auction setting. We then consider an experimental treatment of deploying our algorithm in a highly time-sensitive advertising auction platform.
We study the computational complexity of proper equilibrium in finite games and prove the following results. First, for two-player games in strategic form we show that the task of simply verifying the proper equilibrium conditions of a given pure Nash equilibrium is NP-complete. Next, for n-player games in strategic form we show that the task of computing an approximation of a proper equilibrium is FIXP_a-complete. Finally, for n-player polymatrix games we show that the task of computing a symbolic proper equilibrium is PPAD-complete.
We study a decentralized matching market in which each firm sequentially makes offers to potential workers. For each offer, the worker can choose to "accept" or "reject," but the decision is irrevocable. Accepting an offer guarantees her a job at the firm, but it may also eliminate chances of better offers from other firms in the future. We formulate this market as a perfect-information extensive-form game played by the workers. Each instance of this game has a unique subgame perfect equilibrium (SPE), which does not necessarily lead to a stable matching and has some perplexing properties. Our aim is to establish the complexity of computing the SPE, or more precisely, of deciding whether each offer is accepted in the SPE. We show that the tractability of this problem drastically changes according to the number of potential offers related to each firm and worker. If each firm makes offers to at most two workers (or each worker receives offers from at most two firms), then the problem is efficiently solved by a variant of the deferred acceptance algorithm. In contrast, the problem is PSPACE-hard even if both firms and workers are related to at most three offers.
The Big Match is a multi-stage two-player game. In each stage Player 1 hides one or two pebbles in his hand, and his opponent has to guess that number; Player 1 loses a point if Player 2 is correct, and otherwise he wins a point. As soon as Player 1 hides one pebble, the players cannot change their choices in any future stage.
Blackwell and Ferguson (1968) give an ε-optimal strategy for Player 1 that hides, in each stage, one pebble with a probability that depends on the entire past history. Any strategy that depends just on the clock or on a finite memory is worthless. The long-standing natural open problem has been whether every strategy that depends just on the clock and a finite memory is worthless. We prove that there is such a strategy that is ε-optimal. In fact, we show that just two states of memory are sufficient.
Dynamic interaction appears in many real-world scenarios where players are able to observe (perhaps imperfectly) the actions of another player and react accordingly. We consider the baseline representation of dynamic games - the extensive form - and focus on computing Stackelberg equilibrium (SE), where the leader commits to a strategy to which the follower plays a best response. For one-shot games (e.g., security games), strategy-generation (SG) algorithms offer dramatic speed-ups by incrementally expanding the strategy spaces. However, a direct application of SG to extensive-form games (EFGs) does not bring a similar speed-up since it typically results in a nearly complete strategy space. Our contributions are twofold: (1) for the first time we introduce an algorithm that allows us to incrementally expand the strategy space to find an SE in EFGs; (2) we introduce a heuristic variant of the algorithm that is theoretically incomplete, but in practice allows us to find an exact (or close-to-optimal) Stackelberg equilibrium by constructing a significantly smaller strategy space. Our experimental evaluation confirms that we are able to compute SE by considering only a fraction of the strategy space, which often leads to a significant speed-up in computation times.
Despite their better revenue and welfare guarantees for repeated auctions, dynamic mechanisms have not been widely adopted in practice. This is partly due to the complexity of their implementation as well as their unrealistic use of forecasting for future periods. We address these shortcomings and present a new family of dynamic mechanisms that are simple and require no distribution knowledge of future periods.
This paper introduces the concept of non-clairvoyance in dynamic mechanism design, which is a measure-theoretic restriction on the information that the seller can use. A dynamic mechanism is non-clairvoyant if the allocation and pricing rule at each period does not depend on the type distributions in future periods. We develop a framework (bank account mechanisms) for characterizing, designing, and proving lower bounds for dynamic mechanisms (clairvoyant or non-clairvoyant). This framework is used to characterize the revenue extraction power of non-clairvoyant mechanisms with respect to mechanisms that are allowed unrestricted use of distributional knowledge.
We study revenue optimization in a repeated auction between a single seller and a single buyer. Traditionally, the design of repeated auctions requires strong modeling assumptions about the bidder behavior, such as the buyer being myopic, having infinite lookahead, or following some specific form of learning behavior. Is it possible to design mechanisms which are simultaneously optimal against a multitude of possible buyer behaviors? We answer this question by designing a simple state-based mechanism that is simultaneously approximately optimal against a k-lookahead buyer for all k, a buyer who is a no-regret learner, and a buyer who is a policy-regret learner. Against each type of buyer our mechanism attains a constant fraction of the optimal revenue attainable against that type of buyer. We complement our positive results with almost tight impossibility results, showing that the revenue approximation tradeoffs achieved by our mechanism for different lookahead attitudes are near-optimal.
One of the most tantalizing and long-standing open problems in mechanism design is profit maximization in multi-item, multi-buyer settings. Much of the literature surrounding this problem rests on the strong assumption that the mechanism designer knows the distribution over buyers' values. In reality, this information is rarely available. The support of the distribution alone is often doubly exponential, so obtaining and storing the distribution is impractical. We relax this assumption and instead assume that the mechanism designer only has a set of samples from the distribution. We develop learning-theoretic foundations of sample-based mechanism design. In particular, we provide generalization guarantees which bound the difference between the empirical profit of a mechanism over a set of samples and its expected profit on the unknown distribution.
In this paper, we present a general theory for deriving worst-case generalization bounds in multi-item settings, as well as data-dependent guarantees when the distribution over buyers' values is well-behaved. We analyze mechanism classes that have not yet been studied in the sample-based mechanism design literature and match or improve over the best-known guarantees for many of the special classes that have been studied. The classes we study include randomized mechanisms, pricing mechanisms, multi-part tariffs, and generalized VCG auctions such as affine maximizer auctions.
The literature on "mechanism design from samples," which has flourished in recent years at the interface of economics and computer science, offers a bridge between the classic computer-science approach of worst-case analysis (corresponding to "no samples") and the classic economic approach of average-case analysis for a given Bayesian prior (conceptually corresponding to the number of samples tending to infinity). Nonetheless, the two directions studied so far are extreme and almost diametrically opposed: that of asymptotic results where the number of samples grows large, and that where only a single sample is available. In this paper, we take a first step toward understanding the middle ground that bridges these two approaches: that of a fixed number of samples greater than one. In a variety of contexts, we ask what is possibly the most fundamental question in this direction: are two samples really better than one sample? We present a few surprising negative results, and complement them with our main result: showing that the worst-case, over all regular distributions, expected-revenue guarantee of the Empirical Revenue Maximization algorithm given two samples is greater than that of this algorithm given one sample. The proof is technically challenging, and provides the first result that shows that some deterministic mechanism constructed using two samples can guarantee more than one half of the optimal revenue.
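As a concrete illustration of the rule being analyzed, the sketch below implements single-bidder Empirical Revenue Maximization over a handful of samples: post the sample value that maximizes empirical revenue. This is a toy rendering of the rule only; the paper's contribution is the worst-case guarantee over all regular distributions.

```python
# Toy sketch of single-bidder Empirical Revenue Maximization (ERM): among the
# observed samples, post the price that maximizes revenue on the empirical
# distribution. Illustration of the rule only; the paper analyzes its
# worst-case guarantee over all regular distributions with one vs. two samples.
def erm_reserve(samples):
    ranked = sorted(samples, reverse=True)
    best_price, best_revenue = None, float("-inf")
    for i, p in enumerate(ranked):
        # Price p sells on the i+1 samples whose value is at least p.
        revenue = p * (i + 1) / len(ranked)
        if revenue > best_revenue:
            best_price, best_revenue = p, revenue
    return best_price

print(erm_reserve([3.0]))       # one sample: post that value
print(erm_reserve([3.0, 1.0]))  # two samples: 3*(1/2) beats 1*(1), so post 3.0
```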
We build a natural connection between the learning problem of co-training and forecast elicitation without verification (related to peer prediction), and address them simultaneously using the same information-theoretic approach. In co-training/multiview learning, the goal is to aggregate two views of data into a prediction for a latent label. We show how to optimally combine two views of data by reducing the problem to an optimization problem. Our work gives a unified and rigorous approach to the general setting. In forecast elicitation without verification we seek to design a mechanism that elicits high-quality forecasts from agents in a setting where the mechanism does not have access to the ground truth. By assuming the agents' information is conditionally independent given the outcome, we propose mechanisms where truth-telling is a strict equilibrium for both the single-task and multi-task settings. Our multi-task mechanism additionally has the property that the truth-telling equilibrium pays better than any other strategy profile and strictly better than any other "non-permutation" strategy profile.
A central question of crowdsourcing is how to elicit expertise from agents. This is even more difficult when answers cannot be directly verified. A key challenge is that sophisticated agents may strategically withhold effort or information when they believe their payoff will be based upon comparison with other agents whose reports will likely omit this information due to lack of effort or expertise. Our work defines a natural model for this setting based on the assumption that more sophisticated agents know the beliefs of less sophisticated agents. We then provide a mechanism design framework for this setting. From this framework, we design several novel mechanisms, for both the single-task and multiple-task settings, that (1) encourage agents to invest effort and provide their information honestly; and (2) output a correct "hierarchy" of the information when agents are rational.
Society uses the following game to decide on the supply of a public good. Each agent can choose whether or not to contribute to the good. Contributions are collected and the good is supplied whenever total contributions exceed a threshold. We study the case where the public good is excludable, agents have a common value and each agent receives a private signal about the common value. This game models a standard crowdfunding setting as it is executed in popular crowdfunding platforms such as Kickstarter and Indiegogo. We study how well crowdfunding performs from the firm's perspective, in terms of market penetration, and how it performs from the perspective of society, in terms of efficiency.
Accurately modeling human decision-making in security is critical to thinking about when, why, and how to recommend that users adopt certain secure behaviors. In this work, we conduct behavioral economics experiments to model the rationality of end-user security decision-making in a realistic online experimental system simulating a bank account. We ask participants to make a financially impactful security choice, in the face of transparent risks of account compromise and benefits offered by an optional security behavior (two-factor authentication). We measure the cost and utility of adopting the security behavior via measurements of time spent executing the behavior and estimates of the participant's wage. We find that more than 50% of our participants made rational (i.e., utility-optimal) decisions, and we find that participants are more likely to behave rationally in the face of higher risk. Additionally, we find that users' decisions can be modeled well as a function of past behavior (anchoring effects), knowledge of costs, and to a lesser extent, users' awareness of risks and context (R² = 0.61). We also find evidence of endowment effects, as seen in other areas of the economic and psychological decision-science literature, in our digital-security setting. Finally, using our data, we show theoretically that a "one-size-fits-all" emphasis on security can lead to market losses, but that adoption by a subset of users with higher risks or lower costs can lead to market gains.
The first part of this talk will cover the argument in Budish, Cramton and Shim [1] that the predominant market design used by financial exchanges around the world, called the continuous limit order book, is flawed. The flaw - essentially, a glitch introduced in the transition from human-based financial exchanges to electronic ones - is to treat time as a continuous variable, and process requests to trade serially. This combination of continuous time and serial processing creates mechanical arbitrage opportunities based on symmetric public information --- a violation of efficient markets theory, built right into the market design! --- which in turn harms liquidity provision and induces a never-ending, socially wasteful, arms race for speed. That is, much of high-frequency trading is a symptom of flawed market design. We propose an alternative market design, called frequent batch auctions, in which time is discrete and orders are batch processed using auctions, and show that this directly fixes the problem with the continuous market. The second part of this talk will ask whether the "market will fix the market": that is, whether market forces will lead to the adoption of discrete-time trading instead of continuous-time trading, or whether a regulatory intervention would be needed. This requires a model of how modern stock exchanges compete and earn profits, the subject of new work-in-progress with Lee and Shim [2]. The model shows that competition among stock exchanges is fierce on the dimension of traditional trading fees, but that exchanges have market power in the sale of exchange-specific speed technology - i.e., arms for the arms race. We then use the model to study the private and social returns to market design innovation. We find that even though the social returns to adopting discrete-time trading would be large and positive, the private returns can be negative, due to the loss of rents from speed technology. We discuss some modest policy interventions that could tip the balance of incentives and encourage the "market to fix the market". Last, I will discuss some open questions about financial market design that lie at the intersection of economics and computer science. [1] Budish, E., P. Cramton, and J. Shim. 2015. The High-Frequency Trading Arms Race: Frequent Batch Auctions as a Market Design Response. Quarterly Journal of Economics, pp. 1547-1621. [2] Budish, E., R. Lee, and J. Shim. 2018. Will the Market Fix the Market? A Theory of Stock Exchange Competition and Innovation. Manuscript in preparation.
We consider the stochastic matching problem. An edge-weighted general (i.e., not necessarily bipartite) graph G(V, E) is given in the input, where each edge in E is realized independently with probability p; the realization is initially unknown, however, we are able to query the edges to determine whether they are realized. The goal is to query only a small number of edges to find a realized matching that is sufficiently close to the maximum matching among all realized edges. The stochastic matching problem has received considerable attention during the past decade after the initial paper of Chen et al. [ICALP'09] because of its numerous real-world applications in kidney exchange, matchmaking services, online labor markets, and advertisements. Most relevant to our work are the recent papers of Blum et al. [EC'15], Assadi et al. [EC'16, EC'17], and Maehara and Yamaguchi [SODA'18] that consider the same model of stochastic matching. Our main result is an adaptive algorithm that for any arbitrarily small ε > 0, finds a (1-ε)-approximation in expectation, by querying only O(1) edges per vertex. We further show that our approach leads to a (1/2-ε)-approximate non-adaptive algorithm that also uses O(1) edges per vertex. Prior to our work, no nontrivial approximation was known for weighted graphs using a constant per-vertex budget. The state-of-the-art adaptive (resp. non-adaptive) algorithm of Maehara and Yamaguchi achieves a (1-ε)-approximation (resp. (1/2-ε)-approximation) by querying up to O(w log n) edges per vertex, where w denotes the maximum integer edge-weight. Our result is a substantial improvement over this bound and has an appealing message: no matter what the structure of the input graph is, one can get arbitrarily close to the optimum solution by querying only a constant number of edges per vertex. To obtain our results, we introduce novel properties of a generalization of augmenting paths to weighted matchings that may be of independent interest.
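For intuition, here is a minimal sketch of the generic adaptive template behind this line of work (in each of a constant number of rounds, compute a maximum-weight matching among edges not yet known to be unrealized and query its edges). The is_realized callback is a hypothetical stand-in for an edge query; the paper's actual algorithm and its weighted-augmenting-path analysis are substantially more involved.

```python
# Sketch of the generic adaptive template behind this line of work: in each of
# a constant number of rounds, compute a maximum-weight matching among edges
# that are realized or not yet queried, and query its unqueried edges (so each
# vertex is queried at most once per round, i.e., O(1) times overall). The
# is_realized callback is a hypothetical stand-in for an edge query; the
# paper's actual algorithm and analysis are more involved.
import networkx as nx

def adaptive_stochastic_matching(G, is_realized, rounds=5):
    realized, failed = set(), set()
    for _ in range(rounds):
        H = nx.Graph()
        H.add_nodes_from(G.nodes)
        for u, v, data in G.edges(data=True):
            if (u, v) not in failed and (v, u) not in failed:
                H.add_edge(u, v, weight=data.get("weight", 1.0))
        for u, v in nx.max_weight_matching(H):
            if (u, v) in realized or (v, u) in realized:
                continue
            if is_realized(u, v):
                realized.add((u, v))
            else:
                failed.add((u, v))
    R = nx.Graph()
    R.add_nodes_from(G.nodes)
    for u, v in realized:
        R.add_edge(u, v, weight=G[u][v].get("weight", 1.0))
    return nx.max_weight_matching(R)  # best matching among realized edges
```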
We investigate the class of school choice mechanisms that are first-choice maximal (FCM) (i.e., they match a maximal number of students to their reported first choices) and first-choice stable (FCS) (i.e., no students form blocking pairs with their reported first choices). FCM is a ubiquitous desideratum in school choice, and we show that FCS is the only rank-based relaxation of stability that is compatible with FCM. The class of FCM and FCS mechanisms includes variants of the well-known Boston mechanism as well as certain Asymmetric Chinese Parallel mechanisms. Regarding incentives, we show that while no mechanism in this class is strategyproof, the Pareto efficient ones are least susceptible to manipulation. Regarding student welfare, we show that the Nash equilibrium outcomes of these mechanisms correspond precisely to the set of stable matchings. By contrast, when some students are sincere, we show that more students may be matched to their true first choices in equilibrium than under any stable matching. Finally, we show how our results can be used to obtain a new characterization of the Boston mechanism (i.e., the most widely used FCM and FCS mechanism). On a technical level, this paper provides new insights about an influential class of school choice mechanisms. For practical market design, our results yield a potential rationale for the popularity of FCM and FCS mechanisms in practice.
We thoroughly study a generalized version of the famous Stable Marriage problem, now based on multi-modal preference lists. The central twist herein is to allow each agent to rank its potentially matching counterparts based on more than one "evaluation mode" (e.g., more than one criterion); thus, each agent is equipped with multiple preference lists, each ranking the counterparts in a possibly different way. We introduce and study three natural concepts of stability, investigate their mutual relations, and focus on computational complexity aspects with respect to computing stable matchings in these new scenarios. While we mostly encounter computational hardness (NP-hardness), we also spot a few islands of tractability and make a surprising connection to the Graph Isomorphism problem.
There are many settings in which a principal performs a task by delegating it to an agent, who searches over possible solutions and proposes one to the principal. This describes many aspects of the workflow within organizations, as well as many of the activities undertaken by regulatory bodies, who often obtain relevant information from the parties being regulated through a process of delegation. A fundamental tension underlying delegation is the fact that the agent's interests will typically differ -- potentially significantly -- from the interests of the principal, and as a result the agent may propose solutions based on their own incentives that are inefficient for the principal. A basic problem, therefore, is to design mechanisms by which the principal can constrain the set of proposals they are willing to accept from the agent, to ensure a certain level of quality for the principal from the proposed solution. In this work, we investigate how much the principal loses -- quantitatively, in terms of the objective they are trying to optimize -- when they delegate to an agent. We develop a methodology for bounding this loss of efficiency, and show that in a very general model of delegation, there is a family of mechanisms achieving a universal bound on the ratio between the quality of the solution obtained through delegation and the quality the principal could obtain in an idealized benchmark where they searched for a solution themself. Moreover, it is possible to achieve such bounds through mechanisms with a natural threshold structure, which are thus structurally simpler than the optimal mechanisms typically considered in the literature on delegation. At the heart of our framework is an unexpected connection between delegation and the analysis of prophet inequalities, which we leverage to provide bounds on the behavior of our delegation mechanisms.
In the Prophet Secretary problem, samples from a known set of probability distributions arrive one by one in a uniformly random order, and an algorithm must irrevocably pick one of the samples as soon as it arrives. The goal is to maximize the expected value of the sample picked relative to the expected maximum of the distributions. This is one of the simplest and most fundamental problems in online decision making, modeling the process of selling one item to a sequence of customers. For a closely related problem called the Prophet Inequality, where the order of the random variables is adversarial, it is known that one can achieve in expectation 1/2 of the expected maximum, and no better ratio is possible. For the Prophet Secretary problem, that is, when the variables arrive in a random order, Esfandiari et al. (2015) showed that one can actually get 1-1/e of the maximum. The 1-1/e bound was recently extended to more general settings by Ehsani et al. (2018). Given these results, one might be tempted to believe that 1-1/e is the correct bound. We show that this is not the case by providing an algorithm for the Prophet Secretary problem that beats the 1-1/e bound and achieves 1-1/e+1/400 times the expected maximum. We also prove a hardness result on the performance of algorithms under a natural restriction which we call deterministic distribution-insensitivity.
The prophet and secretary problems demonstrate online scenarios involving optimal stopping theory. In a typical prophet or secretary problem, selection decisions are assumed to be immediate and irrevocable. However, many online settings accommodate some degree of revocability. To study such scenarios, we introduce the l-out-of-k setting, where the decision maker can select up to k elements immediately and irrevocably, but her performance is measured by the top l elements in the selected set. Equivalently, the decision maker can hold up to l elements at any given point in time, but can make up to k-l returns as new elements arrive. We give upper and lower bounds on the competitive ratio of l-out-of-k prophet and secretary scenarios. For l-out-of-k prophet scenarios we provide a single-sample algorithm with competitive ratio 1 - l·e^{-Θ((k-l)^2/k)}. The algorithm is a single-threshold algorithm, which sets a threshold that equals the (l+k/2)-th highest sample, and accepts all values exceeding this threshold, up to reaching capacity k. On the other hand, we show that this result is tight if the number of possible returns is linear in l (i.e., k-l = Θ(l)). In particular, we show that no single-sample algorithm obtains a competitive ratio better than 1 - 2^{-(2k+1)}/(k+1). We also present a deterministic single-threshold algorithm for the 1-out-of-k prophet setting which obtains a competitive ratio of 1 - (3/2)·e^{-k/6}, knowing only the distribution of the maximum value. This result improves upon the result of [Assaf & Samuel-Cahn, J. of App. Prob., 2000].
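Below is a minimal sketch of the shape of the single-sample, single-threshold rule described above, taking the threshold index literally from the statement; the precise index, tie-breaking, and analysis are the paper's, and the function is an illustration only.

```python
# Sketch of the single-sample, single-threshold rule described above: set the
# threshold to the (l + k/2)-th highest sample (index taken literally from the
# statement) and accept every arriving value above it until k elements have
# been selected; performance is then judged by the top l of them. This only
# illustrates the shape of the rule, not its analysis or exact constants.
def l_out_of_k_select(samples, arriving_values, l, k):
    ranked = sorted(samples, reverse=True)
    idx = min(len(ranked) - 1, max(0, int(l + k / 2) - 1))
    threshold = ranked[idx]
    selected = []
    for v in arriving_values:
        if v > threshold and len(selected) < k:
            selected.append(v)
    return sorted(selected, reverse=True)[:l]  # the top-l kept elements
```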
We study the classic setting where two agents compete against each other in a zero-sum game by applying the Multiplicative Weights Update (MWU) algorithm. In a twist on the standard approach of [Freund and Schapire 1999], we focus on the KL divergence from the equilibrium, but instead of providing an upper bound on its rate of increase we provide a nonnegative lower bound for games with interior equilibria. This implies movement away from equilibria and towards the boundary. In the case of zero-sum games without interior equilibria, convergence to the boundary (and in fact to the minimal product of subsimplexes that contains all Nash equilibria) follows via an orthogonal argument. In that subspace, divergence from the set of NE applies for all nonequilibrium initial conditions via the first argument. We argue for the robustness of this non-equilibrating behavior by considering the following generalizations: (i) step size: agents may be using different and even decreasing step sizes; (ii) dynamics: agents may be using Follow-the-Regularized-Leader algorithms and possibly apply different regularizers (e.g., MWU versus Gradient Descent), and we also consider a linearized version of MWU; (iii) more than two agents: multiple agents can interact via arbitrary networks of zero-sum polymatrix games and their affine variants. Our results come in stark contrast with the standard interpretation of the behavior of MWU (and more generally regret-minimizing dynamics) in zero-sum games, which is typically referred to as "converging to equilibrium". If equilibria are indeed predictive even for the benchmark class of zero-sum games, agents in practice must deviate robustly from the axiomatic perspective of optimization-driven dynamics as captured by MWU and variants and apply carefully tailored equilibrium-seeking behavioral dynamics.
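To see the phenomenon on a toy instance, the sketch below runs MWU for both players of matching pennies (a zero-sum game with a fully interior equilibrium) and tracks the KL divergence from the uniform equilibrium, which grows rather than shrinks. This is a numerical illustration of the claimed behavior, not the paper's argument.

```python
# Toy illustration: both players of matching pennies (a zero-sum game with a
# fully interior equilibrium) run Multiplicative Weights Update. The summed
# KL divergence from the uniform equilibrium grows over time, illustrating
# the repulsion from interior equilibria discussed above.
import numpy as np

A = np.array([[1.0, -1.0], [-1.0, 1.0]])  # row player's payoff matrix
eta = 0.1                                  # constant step size
x = np.array([0.6, 0.4])                   # row player's mixed strategy
y = np.array([0.3, 0.7])                   # column player's mixed strategy
eq = np.array([0.5, 0.5])                  # the unique (interior) equilibrium

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

print("initial divergence:", kl(eq, x) + kl(eq, y))
for t in range(1000):
    x_new = x * np.exp(eta * (A @ y))      # row player maximizes x^T A y
    y_new = y * np.exp(-eta * (A.T @ x))   # column player minimizes it
    x, y = x_new / x_new.sum(), y_new / y_new.sum()
print("final divergence:  ", kl(eq, x) + kl(eq, y))  # much larger than initial
```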
Negative frequency-dependent selection (i.e., declining fitness with increased frequency in the population) is thought to be one of the factors that maintains biological diversity. In this paper, we give a concrete mathematical argument supporting this. Our model is as follows: A collection of species derive their fitnesses via a rock-paper-scissors-type game whose precise payoffs are a function of the environment. The new aspect of our model lies in adding a feedback loop: the environment changes according to the relative fitnesses of the species (hence, payoffs change as a function of fitness, which in turn changes as a function of payoffs). The changes in the payoffs are in keeping with the principle of negative frequency-dependent selection, which is widespread in nature. In order to model our game as a continuous time dynamical system, we cast it in the setting of a differential game. We show that for certain parameters, this dynamics cycles, i.e., no species goes extinct and diversity is maintained. We believe that our techniques can be applied to optimization and machine learning to show that first order methods (e.g., gradient descent/ascent) do cycle even in online settings in which the loss function changes with time.
A major goal in Algorithmic Game Theory is to justify equilibrium concepts from an algorithmic and complexity perspective. One appealing approach is to identify natural distributed algorithms that converge quickly to an equilibrium. This paper establishes new convergence results for two generalizations of proportional response in Fisher markets with buyers having CES utility functions. The starting points are, respectively, a new convex and a new convex-concave formulation of such markets. The two generalizations correspond to suitable mirror descent algorithms applied to these formulations. Several of our new results are a consequence of new notions of strong Bregman convexity and of strong Bregman convex-concave functions, and associated linear rates of convergence, which may be of independent interest. Among other results, we analyze a damped generalized proportional response and show a linear rate of convergence in a Fisher market with buyers whose utility functions cover the full spectrum of CES utilities aside from the extremes of linear and Leontief utilities; when these utilities are included, we obtain an empirical O(1/T) rate of convergence.
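For context, the sketch below implements the basic (undamped) proportional response dynamic in a linear-utility Fisher market, the classical special case that the paper generalizes; the damped and CES variants, and their mirror-descent interpretation, are not reproduced here.

```python
# Toy sketch of the basic proportional response dynamic in a linear-utility
# Fisher market (the classical special case): each buyer splits her budget
# across goods in proportion to the utility she derived from each good at the
# previous round's allocation. The paper's damped/generalized CES variants and
# their mirror-descent view are not implemented here.
import numpy as np

U = np.array([[2.0, 1.0],       # U[i, j] = buyer i's value per unit of good j
              [1.0, 3.0]])
budgets = np.array([1.0, 1.0])  # buyer budgets; each good has unit supply

bids = np.full_like(U, 0.5)     # start by splitting each budget uniformly
for t in range(200):
    prices = bids.sum(axis=0)                   # p[j] = total bids on good j
    alloc = bids / prices                       # x[i, j] = b[i, j] / p[j]
    gains = U * alloc                           # utility buyer i gets from good j
    bids = budgets[:, None] * gains / gains.sum(axis=1, keepdims=True)

print(bids.sum(axis=0))  # approximate equilibrium prices (≈ [1, 1] here)
```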
We consider a setting where an auctioneer sells a single item to n potential agents with interdependent values. That is, each agent has her own private signal, and the valuation of each agent is a known function of all n private signals. This captures settings such as valuations for oil drilling rights, broadcast rights, pieces of art, and many more.
In the interdependent value setting, all previous work has assumed a so-called single-crossing condition. Single-crossing means that the impact of a private signal, s_i, on the valuation of agent i is greater than the impact of s_i on the valuation of any other agent. It is known that without the single-crossing condition, an efficient outcome cannot be obtained. We ask what approximation to the optimal social welfare can be obtained when valuations do not exhibit single-crossing.
We show that, in general, without the single-crossing condition, one cannot hope to approximate the optimal social welfare any better than assigning the item to a random bidder. Consequently, we consider a relaxed version of single-crossing, c-single-crossing, with some parameter c ≥ 1, which means that the impact of s_i on the valuation of agent i is at least 1/c times the impact of s_i on the valuation of any other agent (c = 1 is single-crossing). Using this relaxed notion, we obtain a host of positive results. These include a prior-free universally truthful 2√n·c^{3/2}-approximation to welfare, and a prior-free deterministic (n-1)c-approximation to welfare. Under appropriate concavity conditions, we improve this to a prior-free universally truthful 2c-approximation to welfare as well as a universally truthful O(c^2)-approximation to the optimal revenue.
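In symbols, the relaxation can be written as follows (a transcription of the verbal definition above; the paper's formal statement may be phrased in terms of discrete signal changes rather than derivatives):

```latex
% c-single-crossing, transcribing the verbal definition above: agent i's own
% signal moves her valuation at least 1/c as much as it moves any other
% agent's valuation; c = 1 recovers standard single-crossing.
\frac{\partial v_i(s)}{\partial s_i} \;\ge\; \frac{1}{c}\, \frac{\partial v_j(s)}{\partial s_i}
\qquad \text{for all } j \neq i \text{ and all signal profiles } s.
```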
Consider an extensive-form mechanism, run by an auctioneer who communicates sequentially and privately with agents. Suppose the auctioneer can make any deviation that no single agent can detect. We study the mechanisms such that it is incentive-compatible for the auctioneer not to deviate - the credible mechanisms. Consider the optimal auctions in which only winners make transfers. The first-price auction is the unique credible static mechanism. The ascending auction is the unique credible strategy-proof mechanism.
The seminal impossibility result of Myerson and Satterthwaite [1983] states that for bilateral trade, there is no mechanism that is individually rational (IR), incentive compatible (IC), weakly budget balanced, and efficient. This has led follow-up work on two-sided trade settings to weaken the efficiency requirement and consider approximately efficient simple mechanisms, while still demanding the other properties. The current state-of-the-art of such mechanisms for two-sided markets can be categorized as giving one (but not both) of the following two types of approximation guarantees on the gains from trade: a constant ex-ante guarantee, measured with respect to the second-best efficiency benchmark, or an asymptotically optimal ex-post guarantee, measured with respect to the first-best efficiency benchmark. Here the second-best efficiency benchmark refers to the highest gains from trade attainable by any IR, IC and weakly budget balanced mechanism, while the first-best efficiency benchmark refers to the maximum gains from trade (attainable by the VCG mechanism, which is not weakly budget balanced).
In this paper, we construct simple mechanisms for double-auction and matching markets that simultaneously achieve both types of guarantees: these are ex-post IR, Bayesian IC, and ex-post weakly budget balanced mechanisms that 1) ex-ante guarantee a constant fraction of the gains from trade of the second-best, and 2) ex-post guarantee a realization-dependent fraction of the gains from trade of the first-best, such that this realization-dependent fraction converges to 1 (full efficiency) as the market grows large.
We believe that mechanisms that not only provide a worst-case guarantee, but also provide a guarantee of performing very well on "easy instances" are appealing and more likely to be used. In our setting, we implement this agenda by postulating that "nice instances" are large-market instances, and we are able to achieve the best of both worlds: a guaranteed constant approximation on one hand, and asymptotic optimality when the markets are large on the other hand. We believe that presenting similar results in other settings is an interesting research direction.
Network pricing games provide a framework for modeling real-world settings with two types of strategic agents: owners (operators) of the network and users of the network. Owners of the network post a price for usage of the link they own so as to attract users and maximize profit; users of the network select routes based on price and level of use by other users. We point out that an equilibrium in these games may not exist, may not be unique, and may induce an arbitrarily inefficient network performance. Our main result is to observe that a simple regulation of the market of network owners solves all three issues above. Specifically, if an authority could set appropriate caps (upper bounds) on the tolls (prices) operators can charge, then the game among the link operators has a unique and strong Nash equilibrium, and the users' game results in a Wardrop equilibrium that achieves the optimal total delay. We call any price vector with these properties a great set of tolls. As a secondary objective, we want to compute great tolls that minimize total users' payments, and we provide a linear program that does this. We obtain multiplicative approximation results compared to the optimal total users' payments for arbitrary networks with polynomial latencies of bounded degree, while in the single-commodity case we obtain a bound that depends only on the topology of the network. Lastly, we show how the same mechanism of setting appropriate caps on the allowable prices extends to the model of elastic demands.
We consider bottleneck congestion games in an arbitrary graph G where player strategies are flows in G. Each player's objective is to select a flow that minimizes the maximum load on any edge, that is, to minimize the bottleneck congestion. We consider splittable and unsplittable games with pure strategies. It has been an open problem for many years to determine whether it is possible to compute in polynomial time Nash equilibria for bottleneck congestion games in arbitrary graphs. For splittable games we provide a polynomial-time algorithm to compute a Nash equilibrium which is also a global optimum. The unsplittable game problem is known to be PLS-complete, and so we focus on approximate Nash equilibria where players are approximately stable. For uniform player demands we give an algorithm to compute an O(log m)-approximate unsplittable equilibrium in polynomial time, where m is the number of edges. For non-uniform player demands we give an algorithm to compute an O(ζ log(ζm))-approximate unsplittable equilibrium in polynomial time, where ζ = O(1 + log(d_max/d_min)) and d_max and d_min are the maximum and minimum player demands, respectively. To our knowledge, these are the first general results for efficiently computing equilibria of pure bottleneck congestion games in arbitrary graphs, both for the splittable and unsplittable cases.
Protecting valuable targets from an adversary is an ever-important international concern with far-reaching applications in wildlife protection, border protection, counter-terrorism, protection of ships from piracy, etc. As a successful recent approach, security games cast these issues as two-player games between a defender and an attacker. The defender decides how to allocate the available resources to protect targets against the attacker, who strives to inflict damage on them. The main question of interest here is equilibrium computation. Our focus in this paper is on spatio-temporal security games. However, inspired by the paper of Xu [EC'16], we start with a general model of security games and show that any approximation (of any factor) for the defender's best response (DBR) problem leads to an approximation of the same factor for the actual game. In most applications of security games, the targets are mobile. This leads to a well-studied class of succinct games, namely spatio-temporal security games, that is played in space and time. In such games, the defender has to specify a time-dependent patrolling strategy over a spatial domain to protect a set of moving targets. We give a generalized model of prior spatio-temporal security games that is played on a base graph G: the patrols can be placed on the vertices of G and move along its edges over time. This unifies and generalizes prior spatio-temporal models that only consider specific spatial domains such as lines or grids. Graphs can further model many other domains of practical interest such as roads, internal maps of buildings, etc. Finding an optimal defender strategy becomes NP-hard on general graphs. To overcome this, we give an LP relaxation of the DBR problem and devise a rounding technique to obtain an almost optimal integral solution. More precisely, we show that one can achieve a $(1-ε)$-approximation in polynomial time if we allow the defender to use $\lceil \ln(1/ε) \rceil$ times more patrols. We later show that this result is, in some sense, the best possible in polynomial time (unless P=NP). Furthermore, we show that by using a novel dependent rounding technique, the same LP relaxation gives an optimal solution for specific domains of interest, such as one-dimensional spaces. This result simplifies and improves upon the prior algorithm of Behnezhad et al. [EC'17] in several respects and can be generalized to other graphs of interest such as cycles. Lastly, we note that most prior algorithms for security games assume that the attacker attacks only once and become intractable for a super-constant number of attacks. Our algorithms are fully polynomial in the input size and work for any given number of attacks.
We study revenue maximization by deterministic mechanisms for the simplest case for which Myerson's characterization does not hold: a single seller selling two items, with independently distributed values, to a single additive buyer. We prove that optimal mechanisms are submodular and hence monotone. Furthermore, we show that in the IID case, optimal mechanisms are symmetric. Our characterizations are surprisingly non-trivial, and we show that they fail to extend in several natural ways, e.g. for correlated distributions or more than two items. In particular, this shows that the optimality of symmetric mechanisms does not follow from the symmetry of the IID distribution.
We analyze the revenue loss due to market shrinkage. Specifically, consider a simple market with one item for sale and n bidders whose values are drawn from some joint distribution. Suppose that the market shrinks as a single bidder retires from the market, and suppose furthermore that the value of this retiring bidder is fixed and always strictly smaller than the values of the other bidders. We show that even this slight decrease in competition might cause a significant fall, by a multiplicative factor of 1/(e+1) ≈ 0.268, in the revenue that can be obtained by a dominant-strategy ex-post individually rational mechanism. In particular, our results imply a solution to an open question posed by Dobzinski, Fu, and Kleinberg [STOC'11].
A sequence of recent studies shows that even in the simple setting of a single seller and a single buyer with additive, independent valuations over m items, the revenue-maximizing mechanism is prohibitively complex. This problem has been addressed using two main approaches. Approximation: the better of two simple mechanisms (sell each item separately, or sell all the items as one bundle) gives 1/6 of the optimal revenue [1]. Enhanced competition: running the simple VCG mechanism with m additional buyers extracts at least the optimal revenue in the original market [17]. Both approaches, however, suffer from severe drawbacks: on the one hand, losing 83% of the revenue is hardly acceptable in any application; on the other hand, attracting a linear number of new buyers may be prohibitive. We show that by combining the two approaches one can achieve the best of both worlds. Specifically, for any constant ε one can obtain a (1-ε) fraction of the optimal revenue by running simple mechanisms --- either selling each item separately or selling all items as a single bundle --- with substantially fewer additional buyers: logarithmically many, a constant number, or even none in some cases.
"Randomized experiments are increasingly central to innovation in many fields. In the tech sector, major platforms run thousands of experiments (called A/B tests) each year on tens of millions of users at any given time and use the results to screen most product innovations. In the policy and academic circles, governments, nonprofit organizations, and academics use randomized control trials to evaluate social programs and shape public policy. Experiments are not only prevalent, but also highly heterogeneous in design. Policy makers and tech giants typically focus on a "go big" approach, obtaining large sample sizes for a small number of experiments to ensure they that can detect even small benefits of a policy intervention. In contrast, many start-ups and entrepreneurs take a different "go lean" approach, running many small tests and discarding any innovation without outstanding success. The idea is to quickly and cheaply experiment with many ideas, abandon or pivot from ideas that do not work, and scale up ideas that do work. In this paper, we study when each of these approaches is appropriate. To do so, we propose a new framework for optimal experimentation that we call the A/B testing problem. The frameworks also yields an optimal strategy of what innovations to implement and methods to calculate the value of data and experimentation. The key insight is that the optimal experimentation strategy depends crucially on the tails of the distribution of innovation quality, and whether these tails have "black swan" outliers, of innovations with a very large positive or negative impact. The A/B testing problem is as follows. A firm has a set of potential innovations i=1,-,I to implement. The quality ..._i of innovation i is unknown and comes from a distribution G. Quality is independently distributed across innovations. The firm selects a number of users n_i to allocate to an A/B test evaluating innovation i. This yields a signal with mean equal to the true quality of idea i and variance a^2/n_i. The firm is subject to the constraint that the total number of users assigned to experiments is no greater than the number N of users available for experimentation. After seeing the realization of the signals, the firm selects a subset S of ideas to implement. The firm's objective is to maximize the expected sum of the true quality of the ideas that are implemented.
The BDM mechanism, introduced by Becker, DeGroot, and Marschak in the 1960s, employs a second-price auction against a random bidder to elicit the willingness to pay of a consumer. The BDM mechanism has recently been used as a treatment assignment mechanism in order to estimate the treatment effects of policy interventions while simultaneously measuring the demand for the intervention. In this work, we develop a personalized extension of the classic BDM mechanism, using modern machine learning algorithms to predict an individual's willingness to pay and to personalize the "random bidder" based on covariates associated with each individual. We show through a mock experiment on Amazon Mechanical Turk that our personalized BDM mechanism results in a lower cost for the experimenter, provides better balance over covariates that are correlated with both the outcome and willingness to pay, and eliminates biases induced by ad hoc boundaries in the classic BDM algorithm. We expect our mechanism to be of use for policy evaluation and market intervention experiments, in particular in development economics. Personalization can provide more efficient resource allocation when running experiments while maintaining statistical correctness.
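For reference, here is a minimal sketch of the classic BDM elicitation step and of how, hypothetically, a personalized price distribution might plug into it. The function names, the uniform price draws, and the way the predicted willingness to pay is used are illustrative assumptions, not the paper's implementation.

```python
import random

def classic_bdm(stated_wtp, price_lo=0.0, price_hi=10.0, rng=random):
    """Classic BDM: draw a random price; the subject buys iff stated WTP >= price,
    and pays the drawn price (second-price logic against a 'random bidder')."""
    price = rng.uniform(price_lo, price_hi)
    buys = stated_wtp >= price
    return buys, (price if buys else 0.0)

def personalized_bdm(stated_wtp, predicted_wtp, spread=2.0, rng=random):
    """Hypothetical personalized variant: center the random price on a
    machine-learned prediction of the individual's WTP, so that prices
    (and hence treatment assignment) are tailored to covariates."""
    price = rng.uniform(max(0.0, predicted_wtp - spread), predicted_wtp + spread)
    buys = stated_wtp >= price
    return buys, (price if buys else 0.0)

# Example: a subject with true WTP 6.3 whose covariates predict a WTP near 6.
print(classic_bdm(6.3))
print(personalized_bdm(6.3, predicted_wtp=6.0))
```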
In recent years, Google has entered a growing set of markets adjacent to its general search engine, launching new products such as Google Flights, Google Shopping, and Google Local. To leverage its dominance in general search, Google has often tied these products to its general search page, excluding competitor content. In this paper, we explore the strategic tradeoffs platforms face when pursuing a tying strategy to enter complementary markets, empirically investigating Google's decision to tie Google Local (its reviews product) to its search engine. We experimentally vary the content displayed on top of organic search results (called the "OneBox") to show either exclusively Google reviews (the status quo) or reviews from multiple platforms that Google's own organic search algorithm determines to be the best performing. We find that users prefer the OneBox and are more likely to engage with it when it includes competitor content. This suggests that Google's strategy of excluding competitor content leaves customers with a lower-quality product. However, the number of reviews contained in Google Local has increased rapidly in recent years, suggesting that there may also be a benefit to excluding competitors. Overall, these results shed light on a strategic tradeoff for companies that enter a complementary market using a tying strategy: while companies have successfully used this strategy, it can also come at a cost to consumers in the form of a lower-quality product.
A solution to marketplace information asymmetries is to have trading partners publicly rate each other post-transaction. Many have shown these ratings are effective; we show that their effectiveness deteriorates over time. The problem is that ratings are prone to inflation, with raters feeling pressure to leave "above average" ratings, which in turn pushes the average higher. This pressure stems from raters' desire to not harm the rated seller. As the potential to harm is what makes ratings effective, reputation systems, as currently designed, sow the seeds of their own irrelevance.
In the standard form of mechanism design, a key assumption is that the designer has reliable information and technology to determine a prior distribution over the agents' types. At the same time, as Wilson's principle points out, a mechanism should rely as little as possible on the prior type distribution. In this paper, we put forward a simple model to formalize this statement. In our model, each agent has a true type distribution, according to which his type is drawn. In addition, the agent is able to commit to a fake type distribution and bids rationally as if his type were drawn from the fake distribution (e.g., plays a Bayesian equilibrium under the fake distributions). We investigate the equilibria of the induced distribution-reporting games among bidders in the context of single-item auctions. We obtain several interesting findings: (1) the game induced by the Myerson auction under our model is strategically equivalent to the first-price auction under the standard model; consequently, the two games are revenue-equivalent. (2) The second-price auction, a well-known prior-independent auction, yields (weakly) more revenue under our model than several reserve-based and virtual-value-based truthful, prior-dependent auctions. Our results complement the current literature, which aims to show the superiority of prior-independent mechanisms.
Auctions are widely used in practice. While also extensively studied in the literature, most of the developments rely on significant common-knowledge assumptions on the seller's and buyers' sides. In this work, we study the design of optimal prior-independent selling mechanisms. In particular, the seller faces buyers whose values are drawn from an unknown distribution, and only knows that the distribution belongs to a particular class. We analyze a competitive ratio objective, in which the seller attempts to optimize the worst-case fraction of revenues garnered compared to those of an oracle with knowledge of the distribution. Our results are along two dimensions. We first characterize the structure of optimal mechanisms. Leveraging this structure, we then establish tight lower and upper bounds on performance, leading to a crisp characterization of optimal performance for a spectrum of families of distributions. In particular, our results imply that a second-price auction is an optimal mechanism when the seller only knows that the buyers' distribution has a monotone increasing hazard rate, and it guarantees at least 71.53% of the optimal revenue against any distribution within this class. Furthermore, a second-price auction is near-optimal when the class of admissible distributions is that of distributions with increasing virtual values (a.k.a. regular distributions): under this class, it guarantees a fraction of 50% of the optimal revenue, and no mechanism can guarantee more than 55.6%.
We address online learning in complex auction settings, such as sponsored search auctions, where the value of the bidder is unknown to her, evolves in an arbitrary manner, and is observed only if the bidder wins an allocation. We leverage the structure of the bidder's utility and the partial feedback that bidders typically receive in auctions to provide algorithms with regret rates, against the best fixed bid in hindsight, whose dependence on the action space is exponentially better than what a generic bandit algorithm would yield and almost matches what could be achieved in the full-information setting. Our results are enabled by analyzing a new online learning setting with outcome-based feedback, which generalizes learning with feedback graphs. We provide an online learning algorithm for this setting, of independent interest, with regret that grows only logarithmically with the number of actions and linearly only in the number of potential outcomes (the latter being very small in most auction settings). Last but not least, we show experimentally that our algorithm outperforms the bandit approach and that this performance is robust to dropping some of our theoretical assumptions or introducing noise in the feedback that the bidder receives.
We consider the problem of a single seller repeatedly selling a single item to a single buyer (specifically, the buyer's value is drawn fresh from a known distribution D in every round). Prior work assumes that the buyer is fully rational and will perfectly reason about how their bids today affect the seller's decisions tomorrow. In this work we initiate a different direction: the buyer simply runs a no-regret learning algorithm over possible bids. We provide a fairly complete characterization of optimal auctions for the seller in this domain. Specifically: - If the buyer bids according to EXP3 (or any "mean-based" learning algorithm), then the seller can extract expected revenue arbitrarily close to the expected welfare. This auction is independent of the buyer's value distribution D, but somewhat unnatural as it is sometimes in the buyer's interest to overbid. - There exists a learning algorithm A such that if the buyer bids according to A, then the optimal strategy for the seller is simply to post the Myerson reserve for D every round. - If the buyer bids according to EXP3 (or any "mean-based" learning algorithm), but the seller is restricted to "natural" auction formats where overbidding is dominated (e.g. Generalized First-Price or Generalized Second-Price), then the optimal strategy for the seller is a pay-your-bid format with decreasing reserves over time. Moreover, the seller's optimal achievable revenue is characterized by a linear program, and can be unboundedly better than the best truthful auction yet simultaneously unboundedly worse than the expected welfare.
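As background for the "mean-based" condition above, here is a minimal sketch of EXP3 run over a discretized bid space against a posted-price seller. The bid grid, price sequence, and value draws are hypothetical; this is the textbook EXP3 update, not the paper's construction.

```python
import numpy as np

def exp3_bidder(prices, value_dist, bids, T=5000, gamma=0.05, seed=0):
    """EXP3 over a finite bid grid. Each round the buyer picks a bid;
    she wins iff bid >= posted price and pays the price (utility = value - price).
    Rewards are importance-weighted so the weight update is unbiased."""
    rng = np.random.default_rng(seed)
    K = len(bids)
    weights = np.ones(K)
    total_utility = 0.0
    for t in range(T):
        probs = (1 - gamma) * weights / weights.sum() + gamma / K
        a = rng.choice(K, p=probs)
        value, price = value_dist(rng), prices[t % len(prices)]
        utility = (value - price) if bids[a] >= price else 0.0
        total_utility += utility
        # Map utility from [-1, 1] into [0, 1] before the exponential update.
        reward = np.clip(utility, -1.0, 1.0) / 2.0 + 0.5
        weights[a] *= np.exp(gamma * (reward / probs[a]) / K)
    return total_utility / T

bids = np.linspace(0.0, 1.0, 11)                 # discretized bid space
prices = [0.3, 0.6]                              # hypothetical seller price sequence
avg_u = exp3_bidder(prices, lambda rng: rng.uniform(0.0, 1.0), bids)
print(f"average per-round utility: {avg_u:.3f}")
```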
We study the problem of fair allocation of indivisible goods. We use the maxmin share (MMS) paradigm introduced by Budish as a measure of fairness. Procaccia and Wang were the first to investigate this fundamental problem in the additive setting. They show that a full maxmin guarantee (a 1-MMS allocation) is not always possible, even when the number of agents is limited to 3. While the existence of an approximate solution (e.g. a 1/2-MMS allocation) is quite straightforward, improving the guarantee becomes subtler for larger constants. Procaccia and Wang provide a proof for the existence of a 2/3-MMS allocation and leave the question open for better guarantees. Our main contribution is an answer to the above question: we improve their result to a 3/4 factor in the additive setting. The main idea of our 3/4-MMS allocation method is clustering the agents. To this end, we introduce three notions and techniques, namely reducibility, matching allocation, and cycle-envy-freeness, and prove the approximation guarantee of our algorithm via non-trivial applications of these techniques. Our analysis involves coloring and double counting arguments that might be of independent interest. One major shortcoming of the current studies on fair allocation is the additivity assumption on the valuations. We alleviate this by extending our results to the submodular, fractionally subadditive, and subadditive settings. More precisely, we give constant approximation guarantees for submodular and XOS agents, and a logarithmic approximation for the case of subadditive agents. Furthermore, we complement our results by providing close upper bounds for each class of valuation functions. Finally, we present algorithms that find such allocations for the additive, submodular, and XOS settings in polynomial time.
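To fix ideas, here is a minimal brute-force sketch of the maxmin share for additive valuations on small instances. The helper names and the exhaustive partition search are illustrative only; the paper's 3/4-MMS algorithm is far more involved.

```python
from itertools import product

def maxmin_share(values, n_agents):
    """Maxmin share of one agent with additive values `values` when the goods are
    split among n_agents: the best, over all n-partitions, of the worst bundle.
    Exhaustive search -- only sensible for small instances."""
    m = len(values)
    best = 0.0
    for assignment in product(range(n_agents), repeat=m):   # good j -> bundle assignment[j]
        bundles = [0.0] * n_agents
        for good, bundle in enumerate(assignment):
            bundles[bundle] += values[good]
        best = max(best, min(bundles))
    return best

def is_alpha_mms(valuations, allocation, alpha):
    """Check whether each agent i receives at least alpha times her MMS value."""
    n = len(valuations)
    for i, vals in enumerate(valuations):
        share = sum(vals[g] for g in allocation[i])
        if share < alpha * maxmin_share(vals, n):
            return False
    return True

# Toy example: 2 agents, 4 goods, and a candidate allocation.
valuations = [[4, 3, 2, 1], [1, 1, 5, 5]]
allocation = [[0, 1], [2, 3]]                 # agent 0 gets goods 0,1; agent 1 gets goods 2,3
print(is_alpha_mms(valuations, allocation, alpha=3/4))
```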
We study the problem of allocating a set of indivisible goods among a set of agents in a fair and efficient manner. An allocation is said to be fair if it is envy-free up to one good (EF1), which means that each agent prefers its own bundle over the bundle of any other agent up to the removal of one good. In addition, an allocation is deemed efficient if it satisfies Pareto efficiency. While each of these well-studied properties is easy to achieve separately, achieving them together is far from obvious. Recently, Caragiannis et al. (2016) established the surprising result that when agents have additive valuations for the goods, there always exists an allocation that simultaneously satisfies these two seemingly incompatible properties. Specifically, they showed that an allocation that maximizes the Nash social welfare objective is both EF1 and Pareto efficient. However, the problem of maximizing Nash social welfare is NP-hard. As a result, this approach does not provide an efficient algorithm for finding a fair and efficient allocation. In this paper, we bypass this barrier, and develop a pseudopolynomial time algorithm for finding allocations that are EF1 and Pareto efficient; in particular, when the valuations are bounded, our algorithm finds such an allocation in polynomial time. Furthermore, we establish a stronger existence result compared to Caragiannis et al. (2016): For additive valuations, there always exists an allocation that is EF1 and fractionally Pareto efficient. Another key contribution of our work is to show that our algorithm provides a polynomial-time 1.45-approximation to the Nash social welfare objective. This improves upon the best known approximation ratio for this problem (namely, the 2-approximation algorithm of Cole et al., 2017), and also matches the lower bound on the integrality gap of the convex program of Cole et al. (2017). Unlike many of the existing approaches, our algorithm is completely combinatorial, and relies on constructing integral Fisher markets wherein specific equilibria are not only efficient, but also fair.
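As a companion to the definition above, here is a minimal sketch of checking EF1 under additive valuations. The function names are illustrative, and this is only the fairness check, not the paper's algorithm for finding allocations that are EF1 and Pareto efficient.

```python
def is_ef1(valuations, allocation):
    """EF1 for additive valuations: for every pair (i, j), agent i values her own
    bundle at least as much as j's bundle after removing some single good from j's."""
    n = len(valuations)
    for i in range(n):
        own = sum(valuations[i][g] for g in allocation[i])
        for j in range(n):
            if i == j or not allocation[j]:
                continue
            others = [valuations[i][g] for g in allocation[j]]
            # Removing the good that i values most in j's bundle is the best case for i.
            if own < sum(others) - max(others):
                return False
    return True

# Toy example: 2 agents, 3 goods.
valuations = [[5, 1, 1], [2, 3, 3]]
print(is_ef1(valuations, [[0], [1, 2]]))   # True: dropping one good from {1, 2} removes agent 0's envy
```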
We consider the problem of fairly allocating indivisible public goods. We model the public goods as elements with feasibility constraints on what subsets of elements can be chosen, and assume that agents have additive utilities across elements. Our model generalizes existing frameworks such as fair public decision making and participatory budgeting. We study a groupwise fairness notion called the core, which generalizes well-studied notions of proportionality and Pareto efficiency, and requires that each subset of agents must receive an outcome that is fair relative to its size. In contrast to the case of divisible public goods (where fractional allocations are permitted), the core is not guaranteed to exist when allocating indivisible public goods. Our primary contributions are the notion of an additive approximation to the core (with a tiny multiplicative loss), and polynomial time algorithms that achieve a small additive approximation, where the additive factor is relative to the largest utility of an agent for an element. If the feasibility constraints define a matroid, we show an additive approximation of 2. A similar approach yields a constant additive bound when the feasibility constraints define a matching. For feasibility constraints defining an arbitrary packing polytope with mild restrictions, we show an additive guarantee that is logarithmic in the width of the polytope. Our algorithms are based on the convex program for maximizing the Nash social welfare, but differ significantly from previous work in how it is used. As far as we are aware, our work is the first to approximate the core in indivisible settings.
We study the dynamic fair division of indivisible goods. Suppose T items arrive online and must be allocated upon arrival to one of n agents, each of whom has a value in [0,1] for the current item. Our goal is to design allocation algorithms that minimize the maximum envy at time T, ENVY_T, defined as the maximum difference between any agent's overall value for the items allocated to another agent and for those allocated to herself. We say that an algorithm has vanishing envy if the ratio of envy over time, ENVY_T/T, goes to zero as T goes to infinity. We design a polynomial-time, deterministic algorithm that achieves ENVY_T ∈ Õ(√(T/n)), and show that this guarantee is asymptotically optimal. We also derive tight (in T) bounds for a more general setting where items arrive in batches.
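A minimal sketch of the quantity being bounded: the simulation below allocates each arriving item uniformly at random (a simple baseline, not the paper's deterministic algorithm) and tracks ENVY_T as defined above.

```python
import numpy as np

def simulate_envy(T=10_000, n=5, seed=0):
    """Allocate T online-arriving items to n agents uniformly at random and
    return ENVY_T = max_{i,j} ( v_i(bundle_j) - v_i(bundle_i) )."""
    rng = np.random.default_rng(seed)
    # value[i][j] accumulates agent i's value for the bundle held by agent j
    value = np.zeros((n, n))
    for _ in range(T):
        item_values = rng.uniform(0.0, 1.0, size=n)   # each agent's value in [0,1] for this item
        winner = rng.integers(n)                       # baseline rule: uniform random allocation
        value[:, winner] += item_values
    return (value - np.diag(value)[:, None]).max()

T = 10_000
envy_T = simulate_envy(T)
print(f"ENVY_T = {envy_T:.1f}, ENVY_T / T = {envy_T / T:.4f}")
```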
Even when global income redistribution is not feasible, market designers can seek to mitigate inequality within individual markets. If sellers are systematically poorer than buyers, for example, they will be willing to sell at relatively low prices. Yet a designer who cares about inequality might prefer to set higher prices precisely when sellers are poor -- effectively, using the market as a redistributive tool. In this paper, we seek to understand how to design goods markets optimally in the presence of persistent inequality. Using a mechanism design approach, we find that redistribution through markets can indeed be optimal. When there is substantial inequality across sides of the market, the designer uses a tax-like mechanism, introducing a wedge between the buyer and seller prices, and redistributing the resulting surplus to the poorer side of the market via lump-sum payments. When there is significant within-side inequality, meanwhile, the designer imposes price controls even though doing so induces rationing.
Platforms facilitating the exchange of goods and services between individuals are prevalent: one can purchase goods from others on eBay, arrange accommodation through Airbnb, and find temporary projects/workers on online labor markets such as Upwork. The majority of these markets exhibit three key features. First, the platforms do not dictate the transaction prices, i.e., buyers and sellers determine at which price the goods/services will be exchanged. Second, not all buyers or sellers on a platform are compatible. This may be due to taste differences (a buyer may be interested only in the types of goods/services a subset of the sellers offer), geographical or import/export restrictions (e.g., being able to provide services only regionally), or other sources of mismatch (e.g., a mismatch in the desired and available skills in online labor markets). Finally, buyers/sellers are heterogeneous in their valuations for goods or services they receive/provide.
In exchange for facilitating trade, these platforms commonly obtain a commission from each transaction and/or charge subscription fees to sellers and buyers who access the platform. As such, their revenues depend both on the chosen commission/subscription fees and on the prices at which buyers/sellers transact. In settings that exhibit the aforementioned features, how should a platform design commission/subscription fees with the objective of maximizing its revenues? Is it sufficient to charge these to only one side of the market or is it necessary to charge them to both sides? What is the role of the underlying compatibility structure, and which structures are more conducive to higher revenues? In order to answer these questions, we consider a model where potential buyers and sellers are divided into types. Not all buyer and seller types are compatible with each other, and we represent the compatibility across these types using a bipartite network: nodes on one side correspond to buyer types, nodes on the other side correspond to seller types, and edges capture compatibility between different types of buyers and sellers. Agents within each type differ in their valuation for the goods they buy/sell. The platform chooses subscription fees, which must be paid by all agents who participate in the market. In addition, it chooses commissions, and a buyer/seller who exchanges goods/services pays a fraction of the transaction price to the platform as indicated by these commissions. The value distributions, subscription fees, and commissions are all possibly type-specific. Given the commissions/subscriptions chosen by the platform, the transaction prices and equilibrium trades are formed endogenously at a competitive equilibrium.
We establish that, in order to maximize its revenues, the platform may need to charge different commissions/subscriptions to different types, depending on their network position. In fact, we show that if the same commissions/subscriptions are employed for all agents on the same side, the revenue loss can be unbounded. We complement this worst-case result by providing a bound on the revenue loss in terms of the supply/demand imbalance across the network under homogeneous value distributions. Surprisingly, we also show that, in general, charging commissions/subscriptions to only one side of the market (i.e., only to buyers or only to sellers) leads to lower revenues than optimal, even when different types on the same side can be charged different fees. Furthermore, we characterize the impact of the network structure on the revenues of the platform. Finally, we investigate how the commissions/subscriptions chosen by the platform impact social welfare. We establish that, under mild convexity assumptions on the value distributions, the revenue-maximizing commissions/subscriptions induce at least 2/3 of the maximum achievable social welfare.
We show how frictions and continuous transfers jointly affect equilibria in a model of matching in trading networks. Our model incorporates distortionary frictions such as transaction taxes, bargaining costs, and incomplete markets. When contracts are fully substitutable for firms, competitive equilibria exist and coincide with outcomes that satisfy a cooperative stability property called trail stability. In the presence of frictions, competitive equilibria might be neither stable nor (constrained) Pareto-efficient. In the absence of frictions, on the other hand, competitive equilibria are stable and in the core, even if utility is imperfectly transferable.
We consider general trading networks with bilateral contracts. We show that a suitably adapted chain stability concept is equivalent to stability if all agents' preferences are jointly fully substitutable and satisfy the Laws of Aggregate Supply and Demand (a condition we call monotone-substitutability). We also present three examples to show that our results are sharp, demonstrating that: (i) if the preferences of some agents do not satisfy the Laws of Aggregate Supply and Demand, then chain stable outcomes may not be stable; (ii) if the preferences of some agents are not fully substitutable, then chain stable outcomes may likewise not be stable; and (iii) if blocking sets are restricted to chains that do not "cross" themselves (i.e., chains that involve each agent in at most two contracts), then an outcome that is robust to such blocks may not be robust to richer blocks. Our results imply that in trading networks with transferable utility, an outcome is consistent with competitive equilibrium if and only if it is not blocked by any chain of contracts. We show, moreover, that from a computational perspective, checking whether an outcome is chain stable is substantially easier than checking whether that outcome is stable directly. Indeed, we show that as the size of the economy grows, the number of chains of trades (corresponding to possible blocking chains) becomes infinitely smaller than the number of general sets of trades (corresponding to possible blocking sets).
The DeGroot model of naive social learning assumes that agents communicate only scalar opinions. In practice, agents communicate not only their opinions but also their confidence in those opinions. We propose a model that captures this aspect of communication by incorporating signal informativeness into the naive social learning scenario. Our proposed model captures aspects of both Bayesian and naive learning. Agents in our model combine their neighbors' beliefs using Bayes' rule, but they naively assume that their neighbors' beliefs are independent. In general, agents in our model may fail to reach a consensus; we show, however, that they do reach a consensus under mild continuity and boundedness assumptions on the initial beliefs. This eventual consensus can be explicitly computed in terms of each agent's centrality and signal informativeness, allowing joint effects to be precisely understood. We apply our theory to the adoption of new technology. In contrast to Banerjee et al. [2018], we show that information about a new technology can be seeded initially in a tightly clustered group without information loss, but only if agents can expressively communicate their beliefs.
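For contrast with the scalar-opinion benchmark mentioned above, here is a minimal sketch of the classic DeGroot update (not the proposed informativeness-aware model): opinions evolve as x_{t+1} = W x_t for a row-stochastic trust matrix W, and the consensus is the centrality-weighted average of the initial opinions. The matrix and initial opinions below are toy values.

```python
import numpy as np

def degroot_consensus(W, x0, iters=200):
    """Iterate the classic DeGroot update x_{t+1} = W x_t, where W is a
    row-stochastic trust matrix and x0 the vector of initial opinions."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        x = W @ x
    return x

# Toy 3-agent network with row-stochastic listening weights.
W = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])
x0 = [1.0, 0.0, 0.5]
print(degroot_consensus(W, x0))          # all entries converge to the same consensus value

# The consensus equals the left-eigenvector (centrality) weighted average of x0.
eigvals, eigvecs = np.linalg.eig(W.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()
print(pi @ np.array(x0))
```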
Bayesian agents learn about a moving target, such as a commodity price, using private signals and their network neighbors' estimates. The weights agents place on these sources of information are endogenously determined by the precisions and correlations of the sources; the weights, in turn, determine future correlations. We study stationary equilibria, ones in which all of these quantities are constant over time. Equilibria in linear updating rules always exist, yielding a Bayesian learning model as tractable as the commonly-used DeGroot heuristic. Equilibria and the comparative statics of learning outcomes can be readily computed even in large networks. Substantively, we identify pervasive inefficiencies in Bayesian learning. In any stationary equilibrium where agents put positive weights on neighbors' actions, learning is Pareto inefficient in a generic network: agents rationally overuse social information and underuse their private signals.
We consider social learning settings in which a group of agents face uncertainty regarding a state of the world, observe private signals, share the same utility function, and act in a general dynamic setting. We introduce Social Learning Equilibria, a static equilibrium concept that abstracts away from the details of the given dynamics, but nevertheless captures the corresponding asymptotic equilibrium behavior. We establish strong equilibrium properties on agreement, herding, and information aggregation.
Identifying the optimal set of individuals to first receive information (`seeds') in a social network is a widely-studied question in many settings, such as the diffusion of information, microfinance programs, and new technologies. Numerous studies have proposed various network-centrality based heuristics to choose seeds in a way that is likely to boost diffusion. Here we show that, for some frequently studied diffusion processes, randomly seeding S + x individuals can prompt a larger cascade than optimally targeting the best S individuals, for a small x. We prove our results for large classes of random networks, but also show that they hold in simulations over several real-world networks. This suggests that the returns to collecting and analyzing network information to identify the optimal seeds may not be economically significant. Given these findings, practitioners interested in communicating a message to a large number of people may wish to compare the cost of network-based targeting to that of slightly expanding initial outreach.
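A minimal sketch of the comparison described above, under an independent-cascade diffusion on a random graph. The graph model, cascade probability, and the degree-based proxy for "optimal" seeds are all illustrative assumptions, not the classes of processes or networks analyzed in the paper.

```python
import random
import networkx as nx

def independent_cascade(G, seeds, p=0.1, rng=random):
    """Simulate one independent-cascade diffusion from `seeds`:
    each newly activated node tries once to activate each neighbor with probability p."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        new = []
        for u in frontier:
            for v in G.neighbors(u):
                if v not in active and rng.random() < p:
                    active.add(v)
                    new.append(v)
        frontier = new
    return len(active)

def avg_spread(G, seeds, runs=200):
    return sum(independent_cascade(G, seeds) for _ in range(runs)) / runs

random.seed(0)
G = nx.erdos_renyi_graph(n=2000, p=0.003, seed=0)
S, x = 10, 3

central = [v for v, _ in sorted(G.degree, key=lambda t: t[1], reverse=True)[:S]]
rand_plus = random.sample(list(G.nodes), S + x)

print("central S seeds    :", avg_spread(G, central))
print("random S + x seeds :", avg_spread(G, rand_plus))
```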
Gross substitutability is a central concept in Economics and is connected to important notions in Discrete Convex Analysis, Number Theory and the analysis of Greedy algorithms in Computer Science. Many different characterizations are known for this class, but providing a constructive description remains a major open problem. The construction problem asks how to construct all gross substitutes from a class of simpler functions using a set of operations. Since gross substitutes are a natural generalization of matroids to real-valued functions, matroid rank functions form a desirable such class of simpler functions. Shioura proved that a rich class of gross substitutes can be expressed as sums of matroid rank functions, but it is open whether all gross substitutes can be constructed this way. Our main result is a negative answer showing that some gross substitutes cannot be expressed as positive linear combinations of matroid rank functions. En route, we provide necessary and sufficient conditions for the sum to preserve substitutability, uncover a new operation preserving substitutability and fully describe all substitutes with at most 4 items.
We study mechanism design for procurement auctions in which the goal is to buy a subset of items or hire a team of providers. To measure the efficiency of a mechanism, one defines an appropriate benchmark capturing a reasonable expectation of the payments and compares the mechanism's payments to this benchmark; the resulting ratio is called the frugality ratio of the mechanism. Procurement auctions are well studied, and the benchmarks proposed for them have evolved over a sequence of papers [Archer et al. 2007; Chen et al. 2010; Elkind et al. 2007; Karlin et al.; Kempe et al. 2010]. In this work, we introduce a new benchmark and, based on it, study classic procurement auctions. Our benchmark addresses critical issues raised by the unintuitive behavior of the previous benchmarks. We show two attractive properties of our benchmark that have been lacking in the previous proposals: monotonicity and smoothness. Based on our benchmark, we provide positive results for vertex cover and knapsack auctions. Prior to this work, Kempe et al. [2010] proposed a constant approximation mechanism for vertex cover auctions; however, their analysis suffers from an error. We give a correct analysis of their mechanism with respect to our benchmark and, in particular, prove that it is optimal up to a constant factor; our analysis differs from theirs. We also study knapsack auctions and give a truthful mechanism with a bounded frugality ratio. We show that this is almost tight by presenting a lower bound on the frugality ratio of any truthful mechanism for such auctions. All our results depend on both properties of the benchmark.
Unit demand auctions power today's search and native ad marketplaces. Traditional implementations make an extreme "separability" assumption: the relative value of any two ad slots is the same for all advertisers. Under this assumption, the optimal assignment problem can be conveniently solved simply by sorting; without it, efficient allocation requires solving a full-blown weighted matching problem. Motivated by prior work and our own empirical evidence against separability, we abandon that assumption and tackle the algorithmic problems of assignment and pricing for general unit demand ad auctions. Instead of computing prices directly, we take a novel approach and compute bidders' full allocation curves---complete mappings from each agent's bid space to their allocation under the optimal assignment function---from which it is trivial to compute most prices of interest, like those of the Generalized Second Price (GSP) or Vickrey-Clarke-Groves (VCG) auctions. Remarkably, we show that these full allocation curves (and therefore prices) can be computed in the same asymptotic runtime required to compute the optimal matching alone.
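As context for the matching problem without separability, here is a minimal sketch of computing the optimal assignment and VCG prices directly, by re-solving the matching once per winner. This is the standard brute-force route, not the paper's allocation-curve approach, which avoids exactly this overhead; the example values are hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def vcg_unit_demand(values):
    """values[i][j] = bidder i's value for slot j (no separability assumed).
    Returns the welfare-maximizing assignment and VCG payments for the winners."""
    values = np.asarray(values, dtype=float)
    rows, cols = linear_sum_assignment(values, maximize=True)
    assignment = {int(r): int(c) for r, c in zip(rows, cols)}
    welfare = values[rows, cols].sum()

    payments = {}
    for i in assignment:
        # Welfare of the other bidders in the chosen assignment.
        others_with_i = welfare - values[i, assignment[i]]
        # Re-solve the matching with bidder i removed.
        reduced = np.delete(values, i, axis=0)
        r, c = linear_sum_assignment(reduced, maximize=True)
        others_without_i = reduced[r, c].sum()
        payments[i] = others_without_i - others_with_i   # externality imposed by i
    return assignment, payments

# Toy example: 3 bidders, 2 slots, with non-separable values.
values = [[10, 4],
          [ 9, 8],
          [ 6, 5]]
assignment, payments = vcg_unit_demand(values)
print(assignment, payments)
```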
We study the efficiency of mechanisms for allocating a divisible resource. Given scalar signals submitted by all users, such a mechanism decides the fraction of the resource that each user will receive and a payment that will be collected from her. Users are self-interested and aim to maximize their utility (defined as their value for the resource fraction they receive minus their payment). Starting with the seminal work of Johari and Tsitsiklis [Operations Research, 2004], a long list of papers studied the price of anarchy (in terms of the social welfare --- the total users' value) of resource allocation mechanisms for a variety of allocation and payment rules. Here, we further assume that each user has a budget constraint that invalidates strategies that yield a payment higher than the user's budget. This subtle assumption, which is arguably more realistic, renders the traditional price of anarchy analysis meaningless, as the set of equilibria may change drastically and their social welfare can be arbitrarily far from optimal. Instead, we study the price of anarchy using the liquid welfare benchmark, which measures efficiency taking budget constraints into account. We show a tight bound of 2 on the liquid price of anarchy of the well-known Kelly mechanism and prove that this result is essentially best possible among all multi-user resource allocation mechanisms. This comes in sharp contrast to the no-budget setting, where there are mechanisms that considerably outperform Kelly in terms of social welfare and even achieve full efficiency. In our proofs, we exploit the particular structure of worst-case games and equilibria, which also allows us to design (nearly) optimal two-player mechanisms by solving simple differential equations.
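A minimal sketch of the Kelly mechanism and the liquid welfare benchmark referenced above. The budgets, values, and bids in the example are hypothetical, and equilibrium computation is not shown.

```python
def kelly_allocation(bids):
    """Kelly mechanism: each user i receives the fraction b_i / sum(b) of the resource
    and pays her bid b_i."""
    total = sum(bids)
    fractions = [b / total if total > 0 else 0.0 for b in bids]
    payments = list(bids)
    return fractions, payments

def liquid_welfare(values, budgets, fractions):
    """Liquid welfare: each user's contribution is capped at her budget,
    i.e. sum_i min(v_i * x_i, B_i)."""
    return sum(min(v * x, B) for v, x, B in zip(values, fractions, budgets))

# Toy example with two budget-constrained users.
values, budgets = [10.0, 4.0], [2.0, 3.0]
bids = [2.0, 1.0]                       # feasible: no bid exceeds the user's budget
fractions, payments = kelly_allocation(bids)
print(fractions, payments, liquid_welfare(values, budgets, fractions))
```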