International Joint Conference on Theoretical Computer Science – Frontier of Algorithmic Wisdom

August 14 - 18, 2023, University of Macau, Macau SAR, China


Invited Speakers

Track B (Blockchain)

  • When Blockchain Meets AI ...

    Kani Chen (Hong Kong University of Science and Technology)

    Abstract

    Blockchain and AI are disruptive yet conflicting technologies. Blockchain enables decentralized, secure ledgers, while AI enables adaptive, human-like or even super-human systems; together they could help democratize data and build trustworthy smart systems. Blockchain aids collaborative AI development, including federated learning and smart contracts for AI models, while AI strengthens blockchain security and scalability. However, state-of-the-art AI requires huge amounts of data, vast computing resources, and colossal models, which pushes it toward centralization, whereas blockchain encourages the opposite route of decentralization. It is hard for decentralized blockchains to tame wild AI and form decentralized AI, and uncontrolled AI on blockchains would introduce risks of system manipulation and loss of human oversight. The two technologies are thus simultaneously complementary and conflicting. With the right human intervention, complement can be encouraged over conflict: blockchain and AI can be designed to work in harmony and progress together, benefiting each other and benefiting mankind.

  • Decentralized Technologies for Web3.0 Applications

    James Zhibin Lei (Hong Kong Applied Science and Technology Research Institute)

    Abstract

    TBA

  • Fault-tolerant and Expressive Cross-Chain Swaps

    Yingjie Xue (Hong Kong University of Science and Technology (Guangzhou))

    Abstract

    Cross-chain swaps enable the exchange of assets that reside on different blockchains. Several protocols have been proposed for atomic cross-chain swaps; however, these protocols are not fault tolerant, in the sense that if any party deviates, no asset transfer can happen. In this paper, we propose two alternative protocols for structuring composable and robust cross-chain swaps. Participants can propose multiple swaps simultaneously and then complete a subset of those swaps according to their needs, which are expressed as predicates capturing each participant's acceptable payoffs. The introduction of predicates makes our protocols more expressive. The protocols are also fault tolerant: even if some participants deviate, the predicates can still be satisfied, and conforming parties can complete an acceptable set of swaps.
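
    As a toy illustration of predicate-gated swap selection (ours, not the paper's protocols, which additionally handle on-chain timing and deviations), the Python sketch below lets parties complete any subset of proposed swaps whose net payoff satisfies every involved party's predicate; all names (swaps, payoff, predicates) are invented for the example.

        # Toy model of predicate-gated swap completion (illustrative only).
        from itertools import combinations

        # A swap moves an asset from one party to another: (from, to, asset, amount).
        swaps = [
            ("alice", "bob",   "BTC", 1.0),   # swap 0
            ("bob",   "alice", "ETH", 15.0),  # swap 1
            ("alice", "carol", "ETH", 5.0),   # swap 2
        ]

        def payoff(party, subset):
            """Toy payoff: swaps received minus swaps sent within `subset`."""
            return (sum(1 for i in subset if swaps[i][1] == party)
                    - sum(1 for i in subset if swaps[i][0] == party))

        # Each party's predicate: an acceptable subset must not leave her worse off.
        predicates = {p: (lambda s, p=p: payoff(p, s) >= 0)
                      for p in ("alice", "bob", "carol")}

        def best_acceptable_subset():
            for r in range(len(swaps), -1, -1):          # prefer larger subsets
                for sub in combinations(range(len(swaps)), r):
                    if all(pred(set(sub)) for pred in predicates.values()):
                        return set(sub)

        print(best_acceptable_subset())   # {0, 1}: only the BTC/ETH pair completes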

  • Hierarchical Cryptographic Wallets: Techniques and Challenges

    Guomin Yang (Singapore Management University)

    Abstract

    Cryptographic wallets are an essential tool for storing and managing the cryptographic keys used to access crypto assets. Hierarchical cryptographic wallets, such as the Hierarchical Deterministic Wallet proposed in Bitcoin Improvement Proposal 32 (BIP32) and the Stealth Address mechanism used by a few privacy-focused cryptocurrencies, have become very popular due to their advantages in easy wallet backup and recovery, convenient key generation and management, support for trustless audit, and so on. In this talk, we will review the existing hierarchical wallet schemes and their approaches to achieving security and privacy, followed by some research challenges and directions.

Track D (Learning Theory)

  • Towards Understanding the Generalization of Graph Neural Networks

    Yong Liu (Renmin University of China)

    Abstract

    Graph neural networks (GNNs) are the most widely adopted models in graph representation learning. Despite their extraordinary success in real-world applications, theoretical understanding of their working mechanism is still at an early stage. In this talk, we move towards this goal from the perspective of generalization. Specifically, we establish high-probability bounds on the generalization gap and gradients in the transductive setting, taking stochastic optimization into account. The theoretical results reveal the architecture-specific factors affecting the generalization gap. Experimental results on benchmark datasets show the consistency between the theoretical results and empirical evidence. Our results provide new insights into understanding the generalization of GNNs.

  • On the Power of Foundation Models

    Yang Yuan (Tsinghua University)

    Abstract

    With infinitely many high-quality data points, infinite computational power, an infinitely large foundation model with a perfect training algorithm and guaranteed zero generalization error on the pretext task, can the model be used for everything? This question cannot be answered by the existing theory of representation, optimization, or generalization, because the issues they mainly investigate are assumed to be nonexistent here. In this paper, we show that category theory provides powerful machinery to answer this question. We prove three results. The first limits the power of prompt-based learning: the model can solve a downstream task with prompts if and only if the task is representable. The second shows that fine-tuning does not have this limit: a foundation model with the minimum required power (up to symmetry) can theoretically solve downstream tasks for the category defined by the pretext task, with fine-tuning and enough resources. Our final result can be seen as a new type of generalization theorem, showing that the foundation model can generate unseen objects from the target category (e.g., images) using structural information from the source category (e.g., texts). Along the way, we provide a categorical framework for supervised and self-supervised learning, which might be of independent interest.

  • Two Phases of Scaling Laws for Nearest Neighbor Classifiers

    Jingzhao Zhang (Tsinghua University)

    Abstract

    A scaling law refers to the observation that the test performance of a model improves as the number of training data points increases. A fast scaling law implies that one can solve machine learning problems simply by growing the data and model sizes; yet in many cases, the benefit of adding more data can be negligible. In this work, we study the rates of scaling laws for nearest neighbor classifiers. We show that a scaling law can have two phases: in the first phase, the generalization error depends polynomially on the data dimension and decreases quickly; in the second phase, the error depends exponentially on the data dimension and decreases slowly. Our analysis highlights the complexity of the data distribution in determining the generalization error. When the data is benignly distributed, our result suggests that the nearest neighbor classifier can achieve a generalization error that depends polynomially, rather than exponentially, on the data dimension.
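
    A toy experiment (ours, not the talk's analysis) that makes the dimension dependence visible: Monte-Carlo test error of a brute-force 1-nearest-neighbor classifier as the sample size n grows, in low and moderate dimension.

        # 1-NN test error vs. sample size, in low and moderate dimension.
        import numpy as np

        rng = np.random.default_rng(0)

        def nn_error(n, d, n_test=200):
            """Monte-Carlo test error of 1-NN on a toy separable task."""
            X = rng.uniform(-1, 1, (n, d));  y = X[:, 0] > 0
            Xt = rng.uniform(-1, 1, (n_test, d));  yt = Xt[:, 0] > 0
            # Squared distances via ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
            d2 = (Xt**2).sum(1)[:, None] + (X**2).sum(1)[None, :] - 2 * Xt @ X.T
            return float(np.mean(y[np.argmin(d2, axis=1)] != yt))

        for d in (2, 20):   # the error falls much faster in low dimension
            print(f"d={d:2d}:", [round(nn_error(n, d), 3) for n in (50, 200, 800, 3200)])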

Track E (Quantum Computing)

  • Discussion on Quantum Computing Software

    Man-Hong Yung (Southern University of Science and Technology)

    Abstract

    Quantum computation represents a disruptive computing paradigm that is rapidly developing and has entered the engineering stage. Prototypes of quantum computing devices manufactured by industry have already demonstrated the potential to surpass classical computers. To harness the computing power of quantum computers, a complete quantum computing system needs to be built, which includes not only quantum hardware but also quantum software and algorithms. This talk focuses on the development of quantum software, including discussions of the current status and trends of quantum technology, as well as the team's progress in developing MindSpore Quantum, a new quantum-classical hybrid computing framework launched by the Ascend MindSpore community. MindSpore Quantum efficiently supports mainstream quantum algorithms and classical-quantum hybrid algorithms, and can speed up applications of variational quantum algorithms such as quantum machine learning, quantum chemical simulation, and quantum combinatorial optimization. It also leads in the scale of quantum chemical simulation, providing an efficient development environment for quantum computing research and development.
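
    For flavor, a minimal MindSpore Quantum usage sketch preparing and reading out a Bell state; module and backend names follow recent public releases of the mindquantum package and may differ across versions.

        # Minimal MindSpore Quantum example: prepare and inspect a Bell state.
        from mindquantum.core.circuit import Circuit
        from mindquantum.core.gates import H, X
        from mindquantum.simulator import Simulator

        circ = Circuit()
        circ += H.on(0)      # Hadamard on qubit 0
        circ += X.on(1, 0)   # X on qubit 1 controlled by qubit 0, i.e. CNOT

        sim = Simulator('mqvector', 2)   # state-vector simulator backend
        sim.apply_circuit(circ)
        print(sim.get_qs(ket=True))      # expect (|00> + |11>)/sqrt(2)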

  • Near-Future Quantum Computing: Advancements in Circuit Optimization

    Shengyu Zhang (Tencent Quantum Lab)

    Abstract

    Quantum computing promises significant speedups over classical computing for certain computational problems; however, current quantum computers grapple with issues of scale, connectivity, and noise. In this talk, we will discuss recent advancements in optimizing the size and depth of quantum circuits for near-term quantum computers. We will present tight bounds on the depth and size of quantum circuits for crucial tasks such as quantum state preparation, diagonal unitaries, 2-by-2 block diagonal unitaries, and general unitary synthesis.

Track G (Multi-agent Learning, Multi-agent System, Multi-agent Games)

  • Revenue-Maximizing Mechanism in Bilateral Trade with Interdependent Valuations

    Weiran Shen (Renmin University of China)

    Abstract

    We study the revenue-maximizing mechanism in bilateral trade with interdependent valuations. The seller privately observes the item's quality, while the buyer privately knows his valuation type; the seller's valuation depends only on the quality, and the buyer's valuation depends on both the quality and his type. The trade goes through an uninformed mediator, who designs a mechanism ex ante to collect or disclose information, as well as to sell information and levy fees. Information exchange or sales may occur over multiple rounds, and players can exit at any time. We characterize a simple direct mechanism that maximizes revenue: both players report their types, and the mediator directly recommends whether to trade. Moreover, the recommendation rule takes a threshold structure.
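
    In symbols (notation ours; the orientation of the inequality is illustrative, as the abstract does not fix it), a recommendation rule has a threshold structure when there exists a function \theta(\cdot) such that

        \text{recommend trade at } (q, t) \;\iff\; q \le \theta(t),

    where q is the seller's reported quality and t the buyer's reported type.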

  • TBA

    Tao Xiao (Huawei TCS Lab)

    Abstract

    TBA

Track H (Conscious AI)

  • A Framework for a Conscious AI: A Short Introduction to Conscious Turing Machine

    Lenore Blum, Manuel Blum, Yurong Chen

    Abstract

    This talk will briefly introduce the Conscious Turing Machine (CTM), a model proposed by Lenore Blum and Manuel Blum that examines consciousness from the perspective of theoretical computer science (TCS). The CTM is influenced by Alan Turing's simple yet powerful model of computation, the Turing machine (TM), and by the global workspace theory (GWT) of consciousness originated by cognitive neuroscientist Bernard Baars and further developed by him, Stanislas Dehaene, Jean-Pierre Changeux, George Mashour, and others.
    However, the CTM is not a standard Turing Machine. It is not the input-output map that gives the CTM its feeling of consciousness, but what is under the hood. Nor is the CTM a standard GW model. In addition to its architecture, what gives the CTM its feeling of consciousness is its predictive dynamics (cycles of prediction, feedback, and learning), its internal multi-modal language Brainish, and certain special Long Term Memory (LTM) processors, including its Inner Speech and Model of the World processors.
    Phenomena generally associated with consciousness, such as blindsight, inattentional blindness, change blindness, dream creation, and free will, are considered. Explanations derived from the model draw confirmation from consistencies at a high level, well above the level of neurons, with the cognitive neuroscience literature.

  • A possible mechanism for CTM generating self-consciousness

    Shaoyang Cui (Peking University)

    Abstract

    The desire for a better understanding of consciousness comes not only from philosophers and psychologists but also from computer scientists. Manuel Blum and Lenore Blum propose a formal theoretical computer science (TCS) model called the Conscious Turing Machine (CTM), a model of consciousness inspired by the Global Workspace Theory of Bernard Baars. We discuss the consciousness of the CTM, focusing on self-consciousness: we give a clear definition of it and design a possible model of the Model-of-the-World (MotW) processor. Finally, we use some illusions and disorders to explain our MotW processor model.

  • A Unified CTM Framework of Delusion

    Hengli Li (Peking University)

    Abstract

    In this paper, we focus on the CTM perspective for explaining an important psychological phenomenon: delusion. We identify the key elements responsible for delusion and propose a corresponding unified CTM framework. We argue that, in the CTM model, both a malfunction of the MotW processor and an impairment of the message-passing structure are required for delusion to occur. We then conduct thorough case studies, presenting neuropsychological explanations and CTM interpretations of several representative delusions. This examination also sheds light on our understanding of healthy CTMs.

  • How does CTM learn knowledge and skills?

    Ningyuan Li (Peking University)

    Abstract

    We try to explain the process through which a CTM learns new knowledge and skills. We discuss how knowledge and skills are represented in the CTM and describe the CTM's learning process, driven by a motivation mechanism based on its predictive dynamics.

  • An Illustration of Causal Reasoning from a Conscious Turing Machine Perspective

    Xingyu Liu, Xizhi Xiao (Peking University)

    Abstract

    Causal reasoning is the process of identifying the relationship between a cause and its effect, and it is closely related to consciousness. In this talk, we propose a unified probabilistic mechanism for conscious and unconscious causal reasoning in the Conscious Turing Machine (CTM). With this mechanism, the CTM can conduct basic causal reasoning by fusing Bayesian inference with the chain rule of probability. Additionally, we use this mechanism to explain interesting phenomena such as the "explaining-away" effect and the conjunction fallacy. Finally, beyond basic causal reasoning, we propose a probabilistic approach to conducting inductive and deductive reasoning in the CTM.
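
    A minimal worked example of the explaining-away effect (illustrative numbers, ours): two independent causes of one effect, with posteriors computed by the chain rule and Bayes' rule.

        # Two independent causes C1, C2 of an effect E (noisy-OR likelihood).
        from itertools import product

        p_c1, p_c2 = 0.1, 0.1                    # priors on each cause
        p_e = lambda c1, c2: 1 - (1 - 0.8 * c1) * (1 - 0.8 * c2)

        def joint(c1, c2, e):
            """Chain rule: P(C1) * P(C2) * P(E | C1, C2)."""
            pc = (p_c1 if c1 else 1 - p_c1) * (p_c2 if c2 else 1 - p_c2)
            return pc * (p_e(c1, c2) if e else 1 - p_e(c1, c2))

        # Bayes' rule: P(C1=1 | E=1) is about 0.53 ...
        num = sum(joint(1, c2, 1) for c2 in (0, 1))
        den = sum(joint(c1, c2, 1) for c1, c2 in product((0, 1), repeat=2))
        print("P(C1 | E)       =", num / den)
        # ... but observing the other cause "explains C1 away" (drops to ~0.12):
        den2 = sum(joint(c1, 1, 1) for c1 in (0, 1))
        print("P(C1 | E, C2=1) =", joint(1, 1, 1) / den2)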

Young Faculty in TCS

  • Sparse Fourier Transform in the continuous setting

    Xue Chen (University of Science and Technology of China)

    Abstract

    The Fourier transform is a ubiquitous tool in data processing and analysis. Many real-world signals are approximately sparse in the frequency domain, and this can be exploited to speed up the computation. While this problem, called the sparse Fourier transform, has been well studied in the discrete setting, its counterpart of reconstructing Fourier-sparse signals in the continuous setting was poorly understood until recently. Classical methods, such as Prony's method, the matrix-pencil method, and Shannon-Nyquist sampling, are either notoriously sensitive to noise or require a large number of samples. In this talk, we will introduce our recent framework that provides the first efficient and robust algorithm, with provable guarantees, for reconstructing Fourier-sparse signals with arbitrary frequencies. In contrast, previous approaches for continuous Fourier-sparse signals work only if the frequencies in the signal are well separated. While this separation is necessary for recovering the frequencies themselves, we prove that the signal can be reconstructed without the separation, even under adversarial noise. Moreover, our framework also handles more general Fourier spectra, such as multi-band signals and Gaussian mixtures, since those spectra can be reduced to the sparse case by standard techniques. Based on joint work with Daniel Kane, Eric Price, and Zhao Song.
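
    Concretely, the continuous problem referenced above is the standard one of recovering a k-Fourier-sparse signal (notation ours)

        x(t) \;=\; \sum_{j=1}^{k} v_j \, e^{2\pi i f_j t}, \qquad v_j \in \mathbb{C},\ f_j \in \mathbb{R},

    from noisy samples taken on an observation window of duration T, with no separation assumption on the frequencies f_j.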

  • Mobility Data in Operations: The Facility Location Problem

    Yiding Feng (Microsoft Research New England)

    Abstract

    The recent large-scale availability of mobility data, which captures individual mobility patterns, poses novel operational problems that are exciting and challenging. Motivated by this, we introduce and study a variant of the (cost-minimization) facility location problem where each individual is endowed with two locations (hereafter, her home and work locations), and her connection cost is the minimum distance between any of her locations and its closest facility. We design a polynomial-time algorithm whose approximation ratio is at most 3.103. We complement this positive result by showing that the proposed algorithm is no better than a 3.073-approximation, and that no polynomial-time algorithm achieves an approximation ratio of $2 - \epsilon$ under UG-hardness. We further extend our results and analysis to the model where each individual is endowed with K locations. Finally, we conduct numerical experiments over both synthetic data and US census data (for NYC, greater LA, greater DC, and the Research Triangle) to evaluate the performance of our algorithms. This is joint work with Ozan Candogan.
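
    In symbols (ours), with opening cost c_f for facility f, home/work locations h_i, w_i, and metric d, the problem is to choose a set F of facilities minimizing

        \sum_{f \in F} c_f \;+\; \sum_{i} \min_{f \in F} \min\bigl\{ d(h_i, f),\, d(w_i, f) \bigr\}.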

  • On optimal coresets for clustering in Euclidean spaces

    Lingxiao Huang (Nanjing University)

    Abstract

    Coreset construction for clustering problems in Euclidean spaces has been a prominent research topic over the last decade. One of the key challenges in the field is determining the smallest possible coreset size for Euclidean clustering. In this presentation, we will discuss our recent breakthrough on this fundamental problem: we present optimal coreset sizes for both the one-dimensional and the high-dimensional case, encompassing a broad spectrum of parameter ranges. Our findings shed new light on the theoretical foundations of coreset construction for Euclidean clustering.
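
    For reference, the guarantee at stake is the standard one: a weighted subset S \subseteq P with weights w is an \varepsilon-coreset for (k, z)-clustering if, for every set C of k centers,

        \sum_{p \in S} w_p \, \mathrm{dist}(p, C)^{z} \;\in\; (1 \pm \varepsilon) \sum_{p \in P} \mathrm{dist}(p, C)^{z},

    where z = 1 gives k-median and z = 2 gives k-means.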

  • Weitz's algorithm revisited via Barvinok's polynomial interpolation method

    Shuai Shao (University of Science and Technology of China)

    Abstract

    In this talk, we show that for a general 2-spin system defined on a family of graphs closed under taking subgraphs, if the non-trivial zeros of its partition functions lie outside a neighborhood of the origin in the complex plane, then the system exhibits strong spatial mixing. Moreover, we present a new approach to computing the marginal probabilities used in Weitz's algorithm via Barvinok's interpolation method. Together, these results establish that if Barvinok's algorithm is valid for a 2-spin system on a family of graphs closed under taking subgraphs, then Weitz's algorithm works for this 2-spin system. Based on joint work with Xiaowei Ye.
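
    For concreteness, the partition function in question is (notation ours) that of the 2-spin system with edge activities \beta, \gamma and external field \lambda on G = (V, E):

        Z_G(\beta, \gamma, \lambda) \;=\; \sum_{\sigma : V \to \{0,1\}} \beta^{m_0(\sigma)} \, \gamma^{m_1(\sigma)} \, \lambda^{n_1(\sigma)},

    where m_0(\sigma) and m_1(\sigma) count the edges whose endpoints both receive spin 0 or spin 1, respectively, and n_1(\sigma) counts the vertices receiving spin 1.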

  • Approximating equilibrium under constrained piecewise linear concave utilities

    Yixin Tao (Shanghai University of Finance and Economics)

    Abstract

    This talk focuses on Fisher markets, a special case of general equilibrium theory that achieves efficient and fair allocations of divisible goods. We introduce a new class of utility functions, called constrained piecewise linear concave (PLC) utilities. This general class captures many well-studied special cases, including markets with PLC utilities, markets with satiation, and matching markets. For the special case of PLC utilities, although the problem is PPAD-hard, Devanur and Kannan (FOCS 2008) gave a polynomial-time algorithm when the number of goods is constant. Our main result is a fixed-parameter approximation scheme for computing an approximate equilibrium, where the parameters are the number of agents and the approximation accuracy. This is joint work with Jugal Garg and László Végh.
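
    For background, the equilibrium notion is the standard Fisher one (unit supplies assumed; notation ours): prices p and allocations (x_i) such that every agent i optimally spends her budget B_i and the market clears,

        x_i \in \arg\max \{\, u_i(x) : p \cdot x \le B_i \,\} \ \ \text{for all } i, \qquad \sum_i x_{ij} = 1 \ \text{for every good } j \text{ with } p_j > 0.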

  • Spectral Sparsification, Spectral Rounding, and Matrix Discrepancy

    Hong Zhou (Fuzhou University)

    Abstract

    The spectral sparsification problem introduced by Spielman and Teng is a powerful tool in algorithmic spectral graph theory, which has inspired rapid developments in designing fast graph algorithms in the last decade. In this talk, we will discuss a problem called spectral rounding, which is an interesting variant of spectral sparsification. We show that spectral rounding is powerful in designing approximation algorithms for classical problems such as network design and experimental design. Finally, we will talk about connections between spectral rounding and a family of matrix discrepancy problems.

Undergraduate Forum

  • Sybil-Proof Diffusion Auction in Social Networks

    Ying Wang (Peking University)

    Abstract

    A diffusion auction is a market to sell commodities over a social network, where the challenge is to incentivize buyers to invite their network neighbors to join the market. Existing mechanisms address the challenge in various settings, targeting properties like non-deficiency, incentive compatibility, and social welfare maximization. In dynamic networks with ever-changing structures, buyers can create fake nodes to manipulate mechanisms for personal gain, which is commonly known as the Sybil attack. We observe that Sybil attackers may gain an unfair advantage in existing mechanisms. To resist this potential attack, we propose two diffusion auction mechanisms, the Sybil tax mechanism (STM) and the Sybil cluster mechanism (SCM), to achieve both Sybil-proofness and incentive compatibility in the single-item setting. Our proposal provides the first mechanisms to protect the interests of buyers against Sybil attacks with a mild sacrifice of social welfare and revenue.

  • Incentive Ratios for Fairly Allocating Indivisible Goods: Simple Mechanisms Prevail

    Mingwei Yang (Peking University)

    Abstract

    We study the problem of fairly allocating indivisible goods among strategic agents. Amanatidis et al. [ABCM17] show that truthfulness is incompatible with any meaningful fairness notion, ruling out the possibility of designing truthful and fair mechanisms. However, due to computational inefficiency and the lack of required information, it is not lucrative for an agent to manipulate his valuation in a mechanism that is close to being truthful. This observation prompts us to quantitatively study the degree of untruthfulness of fair mechanisms. Specifically, we adopt the notion of incentive ratio, defined as the ratio between the largest possible utility that an agent can gain by manipulation and his utility under honest behavior in a given mechanism. We select four of the most fundamental mechanisms in the literature on discrete fair division, which are Round-Robin, a cut-and-choose mechanism of Plaut and Roughgarden [PR20], Maximum-Nash-Welfare, and Envy-Graph Procedure, and obtain extensive results regarding the incentive ratios of these mechanisms and their variants. Finally, we complement our results with a proof that the incentive ratio of every mechanism satisfying envy-freeness up to one good is at least 1.074, and thus exceeds 1 by a constant.
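
    Formally (notation ours), the incentive ratio of a mechanism \mathcal{M} is

        \mathrm{IR}(\mathcal{M}) \;=\; \max_{i} \, \sup_{v, \, v_i'} \frac{u_i\bigl(\mathcal{M}(v_i', v_{-i}); v_i\bigr)}{u_i\bigl(\mathcal{M}(v_i, v_{-i}); v_i\bigr)},

    the largest multiplicative gain any agent can obtain by misreporting; a truthful mechanism has incentive ratio 1.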

  • Robust Generalization Requires Exponentially Large Models

    Binghui Li (Peking University)

    Abstract

    It is well known that modern neural networks are vulnerable to adversarial examples. To mitigate this problem, a series of robust learning algorithms have been proposed. However, although the robust training error can be made near zero by some methods, all existing algorithms suffer a high robust generalization error. In this paper, we provide a theoretical understanding of this puzzling phenomenon from the perspective of the expressive power of deep neural networks. Specifically, for binary classification problems with well-separated data, we show that, for ReLU networks, while mild over-parameterization is sufficient for high robust training accuracy, there exists a constant robust generalization gap unless the size of the neural network is exponential in the data dimension d. This result holds even if the data is linearly separable (so that achieving standard generalization is easy), and more generally for any parameterized function class whose VC dimension is at most polynomial in the number of parameters. Moreover, we establish an improved upper bound of exp(O(k)) on the network size needed to achieve low robust generalization error when the data lies on a manifold with intrinsic dimension k (k ≪ d). Nonetheless, we also have a lower bound that grows exponentially in k: the curse of dimensionality is inevitable. By demonstrating an exponential separation between the network sizes needed for low robust training error and for low robust generalization error, our results reveal that the hardness of robust generalization may stem from the expressive power of practical models.

  • The Search-and-Mix Paradigm in Approximate Nash Equilibrium Algorithms

    Dongchen Li (Peking University)

    Abstract

    Finding approximate Nash equilibria in bimatrix games is a widely studied problem in algorithmic game theory, for which a series of polynomial-time algorithms with successively better approximation bounds have been proposed. From the perspective of artificial intelligence, an interesting question is whether computer programs alone can be used to design and analyze algorithms of this kind. Our work provides a partial answer. We observe that such algorithms take a search-and-mix approach, consisting of a search phase followed by a mixing phase, and we fully automate the design and analysis of the mixing phase. For example, we illustrate how to reformulate the mixing phases of all the algorithms in the literature and compute their approximation bounds using computer programs.
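
    For reference, the quantity such algorithms bound is the standard approximation measure of a mixed profile (x, y) in a bimatrix game (A, B):

        f(x, y) \;=\; \max\Bigl\{ \max_i \,(Ay)_i - x^{\top} A y, \ \ \max_j \,(x^{\top} B)_j - x^{\top} B y \Bigr\},

    and the mixing phase selects convex combinations of the strategies produced by the search phase so as to minimize an upper bound on f.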

  • Statistical Non-Interactive Zero-Knowledge Batch Verification: Enhancing Communication Complexity with Logarithmic Aggregation

    Changrui Mu (National University of Singapore)

    Abstract

    In the domain of statistical zero-knowledge (SZK) proofs, it is of great interest to determine whether one can verify a batch of k instances x_1,...,x_k as belonging to a promise problem Π with better communication complexity, in an SZK proof, than by simply running the SZK protocol separately for each instance. Kaslasi et al. [KRR+20] (https://eprint.iacr.org/2020/1167) first presented a private-coin batch verification protocol for any problem with a non-interactive SZK (NISZK) proof system for an honest verifier. [KRV21] (https://eprint.iacr.org/2021/233) extended batch verification for NISZK by constructing a public-coin malicious-verifier SZK protocol for batch verification of NISZK with communication complexity k + poly(m) · polylog(k,m), where m is the communication of the NISZK protocol for one instance. However, that solution has two limitations: the protocol is interactive, and the communication cost grows linearly with the number of batched instances k. In this work, we eliminate both drawbacks by constructing a non-interactive SZK protocol for k instances with communication cost poly(m) · polylog(k,m), consequently resolving two open problems of [KRR+20] (https://eprint.iacr.org/2020/1167).

Young PhD Forum

  • Budget-Constrained Auctions with Unassured Priors: Strategic Equivalence and Structural Properties

    Zhaohua Chen (Peking University)

    Abstract

    In today's online advertising markets, it is common for advertisers to set long-term budgets, and advertising platforms correspondingly adopt budget control methods to ensure that advertisers' payments stay within their budgets. Most budget control methods rely on advertisers' value distributions; however, due to the complex advertising landscape and potential privacy concerns, the platform can hardly learn advertisers' true priors. It is therefore crucial to understand how budget control auction mechanisms perform under unassured priors. This work addresses this question from multiple angles. Specifically, we examine five budget-constrained parameterized mechanisms: bid-discount/pacing first-price/second-price auctions and the Bayesian revenue-optimal auction, and we consider the game among the seller and all buyers induced by these five mechanisms in the stochastic model. We restrict the parameterized mechanisms to satisfy the efficiency condition, which maximizes the seller's revenue by extracting buyers' budgets as effectively as possible. Our main result shows that the Bayesian revenue-optimal mechanism and the efficient bid-discount first-price mechanism yield the same set of Nash equilibrium outcomes. In the symmetric case, we further show that all these (efficient) mechanisms share the same set of possible outcomes. We then dig into the structural properties of these five mechanisms: we characterize sufficient and necessary conditions on the efficient parameter tuple for bid-discount/pacing first-price auctions, and, when buyers do not behave strategically, we reveal the dominance relationships among these mechanisms through their intrinsic structures. In summary, our results establish vast connections among budget-constrained auctions with unassured priors and explore their structural properties, particularly highlighting the advantages of first-price mechanisms.

  • Near-Optimal Quantum Coreset Construction Algorithms for Clustering

    Yecheng Xue (Peking University)

    Abstract

    k-Clustering in \mathbb{R}^d (e.g., k-median and k-means) is a fundamental machine learning problem. While near-linear-time approximation algorithms were known in the classical setting for a dataset of cardinality n, it remained open whether sublinear-time quantum algorithms exist. We give quantum algorithms that find coresets for k-clustering in \mathbb{R}^d with \tilde{O}(\sqrt{nk}d^{3/2}) query complexity. Our coreset reduces the input size from n to \mathrm{poly}(k\epsilon^{-1}d), so that existing \alpha-approximation algorithms for clustering can run on top of it and yield a (1 + \epsilon)\alpha-approximation. This yields a quadratic speedup for various k-clustering approximation algorithms. We complement our algorithm with a nearly matching lower bound: any quantum algorithm must make \Omega(\sqrt{nk}) queries to achieve even an O(1)-approximation for k-clustering.

  • Improved Algorithms for Online Rent Minimization Problem Under Unit-Size Jobs

    Zonghan Yang (Shanghai Jiao Tong University)

    Abstract

    We consider the Online Rent Minimization problem, where online jobs with release times, deadlines, and processing times must be scheduled on machines that can each be rented for a fixed period of length T, and the objective is to minimize the number of machine rents. This problem generalizes Online Machine Minimization, where machines are rented for an infinite period; both problems have an asymptotically optimal competitive ratio of O(\log(p_{\max}/p_{\min})) for general processing times, where p_{\max} and p_{\min} are the maximum and minimum processing times, respectively. However, for small values of p_{\max}/p_{\min}, a better competitive ratio can be achieved by assuming unit-size jobs. Under this assumption, Devanur et al. (2014) gave an optimal e-competitive algorithm for Online Machine Minimization, and Chen and Zhang (2022) gave a (3e+7) \approx 15.16-competitive algorithm for Online Rent Minimization. In this paper, we significantly improve the competitive ratio for Online Rent Minimization with unit-size jobs to 6, using a clean oracle-based online algorithm framework.

  • Weighted EF1 Allocations for Indivisible Chores

    Shengwei Zhou (University of Macau)

    Abstract

    We study how to fairly allocate a set of indivisible chores to a group of agents, where each agent has a non-negative weight representing her obligation for undertaking the chores. We consider the fairness notion of weighted envy-freeness up to one item (WEF1) and propose an efficient picking-sequence algorithm for computing WEF1 allocations. Our analysis is based on a natural and powerful continuous interpretation of picking-sequence algorithms in the weighted setting, which might be of independent interest. Using this interpretation, we establish necessary and sufficient conditions under which picking-sequence algorithms guarantee other fairness notions in the weighted setting. We also study the existence of fair and efficient allocations and propose efficient algorithms for computing allocations that are WEF1 and PO for bi-valued instances; this generalizes the results of Garg et al. (AAAI 2022) and Ebadian et al. (AAMAS 2022) to the weighted setting. Our work also studies the price of fairness for WEF1 and the implications of WEF1 for other fairness notions.
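
    One natural weighted picking-sequence rule, sketched in Python for intuition (illustrative; the talk's algorithm and its WEF1 analysis are more refined): the next picker is the agent whose (picks so far + 1) / weight is smallest, a weighted analogue of round-robin, and she takes her least-disliked remaining chore. All names and numbers are invented for the example.

        # A weighted picking sequence for chores (illustrative rule).
        def weighted_picking(chores, weights, disutility):
            """weights: agent -> w_i; disutility: (agent, chore) -> cost."""
            remaining, picks = set(chores), {a: [] for a in weights}
            while remaining:
                # Weighted round-robin: whoever's (picks + 1) / weight is smallest.
                a = min(weights, key=lambda a: (len(picks[a]) + 1) / weights[a])
                c = min(remaining, key=lambda c: disutility(a, c))  # least-disliked
                picks[a].append(c)
                remaining.remove(c)
            return picks

        alloc = weighted_picking(
            chores=["dishes", "laundry", "trash", "vacuum"],
            weights={"ann": 2.0, "bob": 1.0},   # ann is obliged to do twice as much
            disutility=lambda a, c: len(c) if a == "ann" else 10 - len(c),
        )
        print(alloc)   # ann ends up with roughly 2/3 of the chores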

Female Forum

  • Coordinated Dynamic Bidding in Repeated Second-Price Auctions with Budgets

    Yurong Chen (Peking University)

    Abstract

    In online ad markets, a rising number of advertisers employ bidding agencies to participate in ad auctions. These agencies specialize in designing online algorithms and bid on behalf of their clients. Since an agency typically has information on multiple advertisers, she can potentially coordinate their bids to help her clients achieve higher utilities than under independent bidding. In this paper, we study coordinated online bidding algorithms in repeated second-price auctions with budgets. We propose algorithms that guarantee every client a higher utility than the best she can obtain under independent bidding. We show that these algorithms achieve maximal coalition welfare, and we discuss bidders' incentives to misreport their budgets in symmetric cases. Our proofs combine techniques from online learning and equilibrium analysis, overcoming the difficulty of competing with a multi-dimensional benchmark. The performance of our algorithms is further evaluated by experiments on both synthetic and real data. To the best of our knowledge, we are the first to consider bidder coordination in online repeated auctions with constraints.

  • Locally-iterative (\Delta+1)-Coloring in Sublinear (in \Delta) Rounds

    Xinyu Fu (Nanjing University)

    Abstract

    Distributed graph coloring is one of the most extensively studied problems in distributed computing. There is a canonical family of distributed graph coloring algorithms known as locally-iterative coloring algorithms, first formalized in [Szegedy and Vishwanathan, STOC '93]. In such algorithms, every vertex iteratively updates its own color according to a predetermined function of the current coloring of its local neighborhood. Due to the simplicity and naturalness of this framework, locally-iterative coloring algorithms are of great significance in both theory and practice. In this talk, we will first cover several classical locally-iterative coloring algorithms, such as the standard color reduction algorithm and Linial's algorithm. In the second part of the talk, we focus on our locally-iterative (\Delta+1)-coloring algorithm with runtime O(\Delta^{3/4}\log\Delta)+\log^*{n}, using messages of size O(\log{n}). This is the first locally-iterative (\Delta+1)-coloring algorithm with sublinear-in-\Delta running time, answering the main open question raised in a recent breakthrough [Barenboim, Elkin, and Goldberg, JACM '21].
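
    For intuition, here is the standard color reduction step mentioned above, simulated in Python (ours; the loop is sequential, but each round's recoloring set shares one color and hence is an independent set, so it matches the simultaneous distributed execution): starting from a proper m-coloring with m > \Delta + 1, all vertices of the top color recolor to the smallest color missing from their neighborhood, removing one color per round.

        # Standard color reduction: from a proper m-coloring to (Delta+1) colors.
        def reduce_colors(adj, color):
            delta = max(len(nbrs) for nbrs in adj.values())
            m = max(color.values())
            while m > delta + 1:
                movers = [v for v, c in color.items() if c == m]
                for v in movers:   # movers share color m: an independent set
                    used = {color[u] for u in adj[v]}
                    color[v] = min(c for c in range(1, delta + 2) if c not in used)
                m -= 1             # the top color is now unused
            return color

        # A 4-cycle (Delta = 2) with a wasteful proper 4-coloring -> 3 colors:
        adj = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [1, 3]}
        print(reduce_colors(adj, {1: 1, 2: 2, 3: 3, 4: 4}))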

  • Stability of Decentralized Queueing Networks: Beyond Bipartite Cases

    Qun Hu (Shanghai University of Finance and Economics)

    Abstract

    An important topic in algorithmic game theory is the performance of systems operated by decentralized, self-interested agents, in comparison with systems under centralized control. One such system, modeling queueing, was proposed by Gaitonde and Tardos (EC '20, '21), who studied its stability when there is a centralized authority, when queues use no-regret strategies, and when queues strategize "patiently". Unlike games studied in the classical Price of Anarchy literature, in these queueing systems the strategy and outcome of one round impact the utility of future rounds. In this talk, we generalize Gaitonde and Tardos's model from complete bipartite graphs to general bipartite graphs and DAGs. For general bipartite graphs, we obtain performance guarantees comparable to those of Gaitonde and Tardos, although ours reveal a connection to the dual constraints in the centralized stability conditions and suggest that stability conditions in incomplete bipartite graphs are generally easier to satisfy. For DAGs, we show that a straightforward generalization of the utility function considered by Gaitonde and Tardos may lead to infinite gaps between centralized and decentralized systems when the queues use no-regret strategies; we provide both a novel utility for queues and a service priority rule, which together restore the stability conditions under no-regret strategies. Lastly, for incomplete bipartite graphs, we obtain stability conditions when queues strategize patiently, with a bound slightly worse than that of Gaitonde and Tardos (EC '21) for complete bipartite graphs.

  • Best-of-three-worlds Analysis for Linear Bandits with Follow-the-regularized-leader Algorithm

    Fang Kong (Shanghai Jiao Tong University)

    Abstract

    The linear bandit problem has been studied for many years in both stochastic and adversarial settings. Designing an algorithm that adapts to the environment without knowing the loss type has attracted a lot of interest. Lee et al. [LeeLWZ021] propose an algorithm that actively detects the loss type and then switches between different algorithms specially designed for specific settings; however, such an approach requires meticulous design to perform well in all environments. Follow-the-regularized-leader (FTRL) is another popular type of algorithm that can adapt to different environments. FTRL algorithms are simple in design, and in traditional multi-armed bandit problems their regret bounds are shown to be optimal, in contrast to the detect-switch type. Designing an FTRL-type algorithm for linear bandits has been an important open question for a long time. In this paper, we prove that the FTRL algorithm with a negative-entropy regularizer achieves best-of-three-worlds results for the linear bandit problem. Our regret bounds match, or nearly match, the order achieved by the previous detect-switch-type algorithm, with a much simpler algorithmic design.
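
    For reference (notation ours), FTRL over a decision set \mathcal{X} plays

        x_{t+1} \;=\; \arg\min_{x \in \mathcal{X}} \Bigl\langle x, \sum_{s=1}^{t} \hat{\ell}_s \Bigr\rangle + \frac{1}{\eta_t} \psi(x),

    where \hat{\ell}_s are the (estimated) loss vectors and, for the negative-entropy regularizer used in the paper, \psi(x) = \sum_i x_i \log x_i.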

CSIAM Forum

  • Research on Data Integrity Protection Technologies for Off-chain Storage in Blockchain

    Zhong Chen (Peking University)

    Abstract

    As a distributed ledger, blockchain can provide low-risk and low-cost data security services thanks to its excellent features. In practical applications, blockchain needs to store not only text but also audio and video data at TB or even PB scale, so it is promising to use off-chain storage technology to migrate user data from on-chain to off-chain storage. However, off-chain data may be corrupted by equipment failures, hacker attacks, and the like, threatening data integrity. Existing data integrity auditing schemes based on hash functions require auditors to retrieve the entire data content, leading to a significant amount of transmitted data. The provable data possession (PDP) paradigm offers a promising alternative, since the amount of transmitted data is independent of the audited data volume. Nonetheless, current PDP schemes face challenges including a single point of failure, high storage costs, and a limited range of applications. To tackle these challenges, we have developed a novel provable data possession scheme that eliminates the need to maintain user keys; this design removes the key management center and mitigates the risk of a single point of failure. We also design an efficient PDP scheme that supports locating corrupted data: it uses the file authenticators already stored in the cloud to identify corrupted data blocks, eliminating the need to allocate extra space for data such as hash values and thus reducing storage costs. Finally, whereas existing PDP schemes are designed primarily for data integrity audits, we introduce a groundbreaking PDP scheme that allows concurrent verification of both semantic consistency and data integrity; the resulting verification outcome serves as evidence for subsequent data recovery and accountability, considerably expanding the range of applications for PDP schemes.
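
    To convey only the communication point (this toy is not any of the schemes above), here is a sampling-based challenge-response audit in Python: the response size depends on the challenge size, not on the total amount of stored data. Real PDP schemes go further, using homomorphic authenticators so that even the sampled blocks need not be transmitted.

        # Toy challenge-response audit (illustrative; not a real PDP scheme).
        import hashlib, hmac, os, random

        KEY = os.urandom(32)                       # auditor's secret MAC key
        def tag(i, blk):
            return hmac.new(KEY, i.to_bytes(8, "big") + blk, hashlib.sha256).digest()

        # Outsourcing: the cloud stores each block together with its tag.
        blocks = [os.urandom(4096) for _ in range(1000)]
        cloud = [(b, tag(i, b)) for i, b in enumerate(blocks)]

        # Audit: challenge c random indices; the response is O(c) data,
        # independent of the total number of stored blocks.
        challenge = random.sample(range(len(cloud)), k=10)
        response = [(i, *cloud[i]) for i in challenge]
        print("audit passed:", all(hmac.compare_digest(t, tag(i, b))
                                   for i, b, t in response))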

  • Allocative Inefficiencies in Public Blockchains, MEV, and Private Transactions

    Ye Wang (University of Macau)

    Abstract

    The transparent observability of pending transactions in public distributed ledgers leads to suboptimal allocation of blockspace. We show that augmenting public distributed ledgers with private transaction submission pools can reduce allocative inefficiencies and raise welfare. Full private-pool adoption, while socially desirable, may not be attainable in equilibrium if it limits validators' rent extraction from frontrunning opportunities. To align validator incentives with the social optimum, we propose a pay-for-private-transaction-flow design in which frontrunnable users subsidize validators for the loss of maximal extractable value (MEV) due to full adoption of private pools.