Previous Speakers 2007-2008

Twelfth lecture:

The Economics of Internet Search
Hal R. Varian, Google Chief Economist

Date: Friday, May 16, 2008
Time: 1:00 pm - 2:30pm
Location: Calit2 Auditorium, Atkinson Hall, UCSD



Bio:
Hal R. Varian, Chief Economist at Google, started in May 2002 as a consultant and has been involved in many aspects of the company, including auction design, econometrics, finance, corporate strategy and public policy. He also holds academic appointments at the University of California, Berkeley in the business, economics, and information management departments. Dr. Varian is a fellow of the Guggenheim Foundation, the Econometric Society, and the American Academy of Arts and Sciences. He was Co-Editor of the American Economic Review from 1987-1990 and holds honorary doctorates from the University of Oulu, Finland and the University of Karlsruhe, Germany. He is the author of two major economics textbooks, which have been translated into 22 languages. He is the co-author of a bestselling book on business strategy, Information Rules: A Strategic Guide to the Network Economy, and wrote a monthly column for the New York Times from 2000 to 2007. He received his BS degree from MIT in 1969 and his MA in mathematics and Ph.D. in economics from UC Berkeley in 1973. He has also taught at MIT, Stanford, Oxford, Michigan and other universities around the world.

Abstract:
The last seminar in this year's Calit2-sponsored series, "Behavioral, Social, and Computer Sciences Seminar Series," is Friday, May 23, 2008. This series promotes the development of theory and experiments that apply the computational world view to the theories and methodologies of the social and behavioral sciences.

This lecture provides an introduction to the economics of Internet search engines. After a brief review of the historical development of the technology and the industry, I describe some of the economic features of the auction system used for displaying ads. It turns out that relatively simple economic models provide significant insight into the operation of these auctions. In particular, the classical theory of two-sided matching markets proves very useful in this context.
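
For orientation, the following is a minimal sketch of a generalized second-price (GSP) position auction, the basic format used to rank and price sponsored search ads that the lecture analyzes. The bids, click-through rates, and simple rank-by-bid rule below are illustrative assumptions for exposition, not a description of any production system.

# Minimal sketch of a generalized second-price (GSP) position auction.
# Bids, click-through rates, and the rank-by-bid rule are illustrative
# assumptions, not a description of any production ad system.

def gsp_auction(bids, ctrs):
    """bids: dict advertiser -> bid per click; ctrs: expected clicks per slot (descending)."""
    ranking = sorted(bids, key=bids.get, reverse=True)        # rank advertisers by bid
    results = []
    for slot, ctr in enumerate(ctrs):
        if slot >= len(ranking):
            break
        winner = ranking[slot]
        # Each winner pays the next-highest bid per click (0 if no bidder below).
        price = bids[ranking[slot + 1]] if slot + 1 < len(ranking) else 0.0
        results.append((winner, slot + 1, price, price * ctr))  # expected payment = price * clicks
    return results

if __name__ == "__main__":
    bids = {"A": 3.00, "B": 2.00, "C": 1.00}    # dollars per click
    ctrs = [0.10, 0.05]                          # expected clicks for slots 1 and 2
    for winner, slot, price, expected_cost in gsp_auction(bids, ctrs):
        print(winner, "wins slot", slot, "at", price, "per click; expected cost",
              round(expected_cost, 3))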

Link to paper:
http://people.ischool.berkeley.edu/~hal/Papers/2007/costa-lecture.pdf

Speaker's website:
http://people.ischool.berkeley.edu/~hal/


Eleventh lecture:

Foundations for Bayesian Updating
See webcast

Leeat Yariv, Associate Professor, Division of Humanities and Social Sciences, Caltech

Date: Friday, May 16, 2008
Time: 10:30am - 12:00pm
Location: Computer Science & Engineering (CSE) Building, Room 1202



Bio:
Leeat Yariv is an Associate Professor of Economics at Caltech. Her research interests are in game theory, political economy, and psychology and economics. Yariv received her Ph.D. in economics from Harvard University in 2001 and received undergraduate degrees in mathematics and physics from Tel-Aviv University. In 2001 she took up a postdoctoral fellowship at Yale University, then joined the UCLA Department of Economics as an assistant professor. In 2005, Yariv left UCLA to join Caltech. While most of her work thus far has been theoretical in nature, Yariv recently began using experimental methods and is pursuing a new research agenda on political science experimentation, including the development of jVote for running large web-based voting experiments. In addition, jointly with Caltech professor Jacob Goeree, Yariv established Caltech's Mobile Experimental Laboratory (CAMEL) to conduct experiments with non-standard subject pools, including high-school students, homeless individuals in the LA area, and a variety of professionals.

Abstract:

We provide a simple characterization of updating rules that can be rationalized as Bayesian. Namely, we consider a general setting in which an agent observes finite sequences of signals and reports probabilistic predictions on the underlying state of the world. We study when such predictions are consistent with Bayesian updating, i.e., when there exists some theory about the signal generation process under which the agent behaves as a Bayesian updater. We show that the following condition is necessary and sufficient for the agent to appear Bayesian: the probability distribution that represents the agent's belief after observing any finite sequence of signals is a convex combination of the probability distributions that represent her beliefs conditional on observing the sequences of signals that are the possible continuations of the original sequence. This condition cannot be derived from the ones the literature has identified when confounding the problem with maximization of expected utility. Additional restrictions are identified for all histories of signals to be given positive probability under the identified information generation process, and for the agent's theory to entail conditional independence or exchangeability of signals.
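
In symbols (notation chosen here for exposition, not taken from the paper): writing p(. | h) for the reported belief over states after a finite signal history h, and S for the set of possible next signals, the convexity condition says

% Illustrative restatement of the convexity condition (our notation).
p(\cdot \mid h) \;=\; \sum_{s \in S} \lambda_s \, p(\cdot \mid h, s),
\qquad \lambda_s \ge 0, \quad \sum_{s \in S} \lambda_s = 1 .
% Under a Bayesian theory of signal generation the weights would be
% \lambda_s = \Pr(s \mid h), the predicted probability of each continuation.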

Link to paper:
http://www.hss.caltech.edu/~lyariv/Papers/Bayesian.pdf

Speaker's website:
http://www.hss.caltech.edu/~lyariv/


Tenth lecture:

Computational Models of Human Learning
Charles Kemp, Department of Psychology, Carnegie Mellon University, PA

Date: Friday, April 4, 2008
Time: 4:00 p.m. - 5:30 p.m.
Location: Computer Science & Engineering (CSE) Building, Room 1202



Bio:
Charles Kemp is an Assistant Professor in the Department of Psychology at Carnegie Mellon University. His work focuses on learning and cognitive development. He is particularly interested in high-level cognition, including problems such as categorization, commonsense reasoning and language acquisition. Kemp approaches these problems by building computational models and testing them against behavioral data. Many of these models rely on probabilistic inference and draw on recent ideas from statistics, machine learning, and artificial intelligence.

Abstract:

Despite recent advances in machine learning, the human child is still the best learning system on the planet. I will discuss three principles that help to explain how human knowledge is acquired: learning can be guided by rich prior knowledge, learning can take place at multiple levels of abstraction, and learning can allow structured representations to be acquired. The Bayesian approach to learning can capture all three principles, and Bayesian models have been used to explore many aspects of cognition, including word learning, categorization, and causal reasoning. I will show how some of these models address problems that are routinely solved by humans, but are difficult for traditional learning theory to handle.

Speaker's website:
http://www.psy.cmu.edu/%7Eckemp/



Ninth lecture:

Behavioral Games on Networks
Michael Kearns, Department of Computer and Information Science, University of Pennsylvania, PA

Date: Friday, March 7, 2008
Time: 2:00pm - 3:30pm
Location: Computer Science & Engineering Building (CSE) Building, Room 1202



Bio:
Michael Kearns has been a professor in the Department of Computer and Information Science at the University of Pennsylvania since 2002, where he holds the National Center Chair in Resource Management and Technology. He also has a secondary appointment in the Operations and Information Management (OPIM) department of the Wharton School, and until July 2006 was co-director of Penn's interdisciplinary Institute for Research in Cognitive Science. His primary research interests are in machine learning, probabilistic artificial intelligence, computational game theory and economics, and computational finance. Kearns often blends problems from these areas with methods from theoretical computer science and related disciplines. While the majority of his work is mathematical in nature, Kearns has also participated in a variety of systems and experimental work, including spoken dialogue systems, software agents, and human-subject experiments in strategic interaction.

Abstract:
We have been conducting behavioral experiments in which human subjects attempt to solve challenging graph-theoretic optimization problems through only local interactions and incentives. The primary goal is to shed light on the relationships between network structure and the behavioral and computational difficulty of different problem types. To date, we have conducted experiments in which subjects are incentivized to solve problems of graph coloring, consensus, independent set, and an exchange economy game. I will report on thought-provoking findings at both the collective and individual behavioral levels, and contrast them with theories from theoretical computer science, sociology, and economics. This talk discusses joint work with Stephen Judd, Sid Suri, and Nick Montfort.
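
For readers unfamiliar with the task, the toy simulation below illustrates what solving graph coloring "through only local interactions" can mean: each node sees only its neighbors' colors and, when in conflict, switches to a color none of them uses. This generic repair heuristic is for exposition only; it is not the protocol or incentive scheme used in the human-subject experiments.

import random

# Toy illustration of graph coloring via purely local updates. Each node
# observes only its neighbors' colors and, when in conflict, switches to a
# color unused by any neighbor. Exposition only, not the experimental protocol.

def local_coloring(adj, num_colors, max_steps=10_000, seed=0):
    rng = random.Random(seed)
    color = {v: rng.randrange(num_colors) for v in adj}
    for _ in range(max_steps):
        conflicted = [v for v in adj if any(color[u] == color[v] for u in adj[v])]
        if not conflicted:
            return color, True                   # proper coloring reached
        v = rng.choice(conflicted)               # one node acts on local information
        free = [c for c in range(num_colors) if all(color[u] != c for u in adj[v])]
        if free:
            color[v] = rng.choice(free)          # resolve this node's conflicts
    return color, False

if __name__ == "__main__":
    cycle = {i: [(i - 1) % 7, (i + 1) % 7] for i in range(7)}   # a 7-cycle needs 3 colors
    coloring, solved = local_coloring(cycle, num_colors=3)
    print("solved:", solved, coloring)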

Links to papers:
http://www.cis.upenn.edu/~mkearns/papers/ScienceFinal.pdf
http://www.cis.upenn.edu/~mkearns/papers/behnwt.pdf

Speaker's website:
http://www.cis.upenn.edu/%7Emkearns/



Eighth lecture:

Dynamic Mechanism Design
Ilya Segal, Department of Economics, Stanford University, CA

Date: Friday, February 15, 2008
Time: 2:00pm - 3:30pm
Location: Calit2 Atkinson Hall Auditorium



Bio:
Professor Ilya Segal is the Roy and Betty Anderson Professor in the Humanities and Sciences at Stanford University. He has taught in the university's Department of Economics since 2002. Segal is a specialist in contract theory and has developed models of transactions that take place in complex situations under conditions of uncertainty. He has shown, for example, why central governments of some countries wind up subsidizing failing firms: firms are unable to commit not to renegotiate agreements, he argues, and this gives them an incentive to underinvest in productive assets that might reduce their subsidies. His work also explains why contracts covering complex situations are often relatively incomplete even when complete contracts could have been written at low cost. His research interests lie in microeconomic theory, contract theory, information economics, and industrial organization. He received his Ph.D. from Harvard University and his M.S. in applied mathematics from the Moscow Institute of Physics and Technology.

Abstract:
I will consider the design of efficient and profit-maximizing Bayesian incentive-compatible mechanisms for general dynamic environments with private information. In the environment, agents observe a sequence of private signals over a number of periods. In each period, the agents report their private signals and choose public (contractible) and private actions based on the reports. The probability distribution over future signals may depend on both past signals and past decisions. The general framework covers a broad class of long-term contracts and mechanisms, including advance purchase contracts, repeated auctions, and lifetime taxation mechanisms, allowing for serial correlation of agents' types, investments in agents' values or information, learning-by-doing, and habit formation.

First, I construct an efficient incentive-compatible mechanism under the assumption of Private Values (each agent's payoff is determined by his own observations). Then I show that the budget can be balanced in each period under the assumption of Independent Types (the distribution of each agent's private signals does not depend on the other agents' private information, except through public decisions). I provide conditions under which participation constraints can be satisfied and the mechanism can be made self-enforcing, provided that the time horizon is infinite and players are sufficiently patient.

Next, assuming Independent Types and continuous signal spaces, I derive a "Revenue Equivalence" result showing that when agents' private signals are drawn from continuous intervals, any two dynamic mechanisms that implement the same allocation rule must yield the same expected payoffs to the agents and the same expected revenue to the auctioneer, regardless of the transfers used by the mechanisms and of the information disclosed to the agents in the course of the mechanism. I derive a formula that expresses the auctioneer's present expected profits as the present expected value of a dynamic "virtual surplus," extending Myerson's derivation for static auctions. I characterize allocation rules that maximize present expected virtual surplus and identify the inefficiencies introduced by the profit-maximizing auctioneer. I also provide sufficient conditions for such allocation rules to be implementable in an incentive-compatible mechanism. As applications, I derive a profit-maximizing sequence of auctions when the bidders' types follow an autoregressive process, or when bidders learn about their values by consuming the object.
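
For context, the static benchmark that the dynamic "virtual surplus" extends is Myerson's formula for single-object auctions with independent private values: the seller's expected profit from any incentive-compatible mechanism that holds the lowest types to zero utility equals the expected virtual surplus of the allocation. Stated here in standard notation purely for orientation; the talk's contribution is the dynamic generalization.

% Static benchmark (Myerson). x_i(v): probability bidder i receives the object;
% F_i, f_i: distribution and density of bidder i's value v_i.
\mathbb{E}[\text{profit}]
  \;=\;
\mathbb{E}\!\left[\, \sum_i \left( v_i - \frac{1 - F_i(v_i)}{f_i(v_i)} \right) x_i(v) \right].
% The profit-maximizing static auction therefore allocates to maximize this
% expected "virtual surplus".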

Speaker's website:
http://www.stanford.edu/%7Eisegal/
Link to background paper:
http://www.stanford.edu/~isegal/agv.pdf
http://www.stanford.edu/~isegal/req.pdf


Seventh Lecture:

Gut Feelings: The Intelligence of the Unconscious
Gerd Gigerenzer, Director, Max Planck Institute for Human Development, Berlin, Germany

Date: Friday, February 8, 2008
Time: 2:00pm - 3:30pm
Location: Calit2 Auditorium, First Floor, Atkinson Hall



Bio:
Gerd Gigerenzer is Director of the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development in Berlin, Germany, and former Professor of Psychology at the University of Chicago. He won the AAAS Prize for the best article in the behavioral sciences. He is the author of Calculated Risks: How To Know When Numbers Deceive You, the German translation of which won the Scientific Book of the Year Prize in 2002. He has also published two academic books on heuristics: Simple Heuristics That Make Us Smart (with Peter Todd and the ABC Research Group) and Bounded Rationality: The Adaptive Toolbox (with Reinhard Selten, a Nobel laureate in economics).

Abstract:
We think of intelligence as a deliberate, conscious activity guided by the laws of logic. Yet much of our mental life is unconscious, based on processes alien to logic: gut feelings, or intuitions. I argue that intuition is more than impulse and caprice; it has its own rationale. This rationale can be described by fast and frugal heuristics, which exploit evolved abilities in our brain. Heuristics ignore information and focus on a few important reasons. More information, more time, and even more thinking are not always better; less can be more.
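
One concrete example of a fast and frugal heuristic from this research program is take-the-best: compare two options cue by cue in order of cue validity and decide on the first cue that discriminates, ignoring everything else. The sketch below uses invented cues and validities for illustration.

# Minimal sketch of the "take-the-best" fast-and-frugal heuristic: inspect
# cues in order of validity and decide on the first cue that discriminates
# between the two options, ignoring all remaining cues. The cue names,
# validity ordering, and data below are invented for illustration.

def take_the_best(option_a, option_b, cues_by_validity):
    """Each option is a dict cue -> 0/1; cues_by_validity is ordered best-first."""
    for cue in cues_by_validity:
        a, b = option_a.get(cue, 0), option_b.get(cue, 0)
        if a != b:                        # first discriminating cue decides
            return "A" if a > b else "B"
    return None                           # no cue discriminates: guess or defer

if __name__ == "__main__":
    # Toy task: which of two cities is larger, judged from binary cues.
    city_a = {"has_airport": 1, "is_capital": 0, "has_university": 1}
    city_b = {"has_airport": 1, "is_capital": 1, "has_university": 1}
    cues = ["is_capital", "has_airport", "has_university"]   # assumed validity order
    print("larger city:", take_the_best(city_a, city_b, cues))  # -> "B"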

Speaker's website:
http://mpibweb.mpib-berlin.mpg.de/curriculum_vitae/lang/en/id_name/gigerenzer/index.mpi

** Schedule:

Talk Time: 2:00pm - 3:30pm
Talk Location: Calit2 Auditorium, First Floor, Atkinson Hall

Sixth Lecture:

Explorations in Computational Mechanism Design
David C. Parkes, Computer Science, Harvard University, MA

Date: Friday, January 11, 2008
Time: 2:00pm - 3:30 pm
Location: Computer Science & Engineering (CSE) Building, Room 1202



Abstract:
Professor Parkes will introduce the area of computational mechanism design (CMD), which seeks to understand how to design games to induce desirable outcomes in multi-agent systems despite private information, self-interest and limited computational resources. CMD finds application in many settings, from allocating wireless spectrum and airport landing slots, to internet advertising, to expressive sourcing in the supply chain, to allocating shared computational resources. In meeting the demands for CMD in these rich domains, we need to bridge from the classic theory of economic mechanism design to the practice of deployable, scalable mechanisms. He will first provide a brief overview of the theory of economic mechanism design and explain the idea of strategyproof mechanisms. In moving to CMD, Parkes will highlight his contributions to the design of dynamic coordination mechanisms, relevant in settings with agent arrivals and departures and also with agents that face dynamic local problems. The family of Groves mechanisms can be usefully generalized to these domains, albeit with an interesting change in the solution concept, and coupled with sample-based planning algorithms. This opens up a new frontier of applications, and challenges, for CMD. He will also outline a complementary direction in "computational ironing", which embraces heuristic, scalable algorithms and supports learning by the mechanism over time. Parkes will close with some brief comments about his related work in combinatorial markets, and suggest directions in incentive-compatible social computing.
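
As background for the strategyproof and Groves mechanisms mentioned above, here is a minimal sketch of the Vickrey-Clarke-Groves (VCG) mechanism, the canonical static member of the Groves family. The outcomes and valuations are invented for illustration; the dynamic generalizations discussed in the talk go well beyond this benchmark.

# Minimal sketch of a Vickrey-Clarke-Groves (VCG) mechanism: choose the
# welfare-maximizing outcome and charge each agent the externality it imposes
# on the others. Outcomes and valuations below are invented for illustration.

def vcg(outcomes, valuations):
    """outcomes: list of decisions; valuations: dict agent -> {outcome: value}."""
    def welfare(agents, outcome):
        return sum(valuations[a][outcome] for a in agents)

    agents = list(valuations)
    chosen = max(outcomes, key=lambda o: welfare(agents, o))        # efficient outcome
    payments = {}
    for i in agents:
        others = [a for a in agents if a != i]
        best_without_i = max(welfare(others, o) for o in outcomes)  # what others could get alone
        payments[i] = best_without_i - welfare(others, chosen)      # externality imposed by i
    return chosen, payments

if __name__ == "__main__":
    outcomes = ["build_bridge", "do_nothing"]
    valuations = {
        "agent1": {"build_bridge": 6, "do_nothing": 0},
        "agent2": {"build_bridge": 0, "do_nothing": 4},
    }
    print(vcg(outcomes, valuations))   # chooses build_bridge; agent1 pays 4, agent2 pays 0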


Bio:
David C. Parkes is the John L. Loeb Associate Professor of the Natural Sciences and Associate Professor of Computer Science at Harvard University. He received his Ph.D. degree in Computer and Information Science from the University of Pennsylvania in 2001, and an M.Eng. (First class) in Engineering and Computing Science from Oxford University in 1995. He was awarded the prestigious NSF CAREER Award in 2002, an IBM Faculty Partnership Award in 2002 and 2003, and an Alfred P. Sloan Fellowship in 2005. Parkes has published extensively on topics related to electronic markets, computational mechanism design, auction theory, and multi-agent systems. He is an editor of Games and Economic Behavior and serves on the editorial boards of the Journal of Artificial Intelligence Research and the Electronic Commerce Research Journal. Parkes served as program co-chair of the Eighth ACM Conference on Electronic Commerce (EC'07) and is program co-chair of the Seventh International Conference on Autonomous Agents and Multiagent Systems (AAMAS'08).

Speaker's websites:
http://www.eecs.harvard.edu/%7Eparkes/

Link to background paper:
http://www.eecs.harvard.edu/%7Eparkes/

** Schedule:

Talk Time: 10:30am - 12:00pm (refreshments served before talk)
Talk Location: Calit2 Atkinson Hall Auditorium

Fifth Lecture:

How to Read 100 Million Blogs (& Classify Deaths Without Physicians)
Gary King, Department of Government, Harvard University, MA

Date: Friday, January 4, 2008
Time: 10:30am - 12:00 pm
Location: Computer Science Engineering Building (CSE), Room 1202



Abstract:
The talk will cover two papers on different topics in unrelated fields, but based on some of the same methods: "Extracting Systematic Social Science Meaning from Text" by Gary King and Daniel Hopkins, and "Verbal Autopsy Methods with Multiple Causes of Death" by Gary King and Ying Lu.

"Extracting Systematic Social Science Meaning from Text"
We develop two methods of automated content analysis that give approximately unbiased estimates of quantities of theoretical interest to social scientists. With a small sample of documents hand coded into investigator-chosen categories, our methods can give accurate estimates of the proportion of text documents in each category in a larger population. Existing methods successful at maximizing the percent of documents correctly classified allow for the possibility of substantial estimation bias in the category proportions of interest. Our first approach corrects this bias for any existing classifier, with no additional assumptions. Our second method estimates the proportions without the intermediate step of individual document classification, and thereby greatly reduces the required assumptions. For both methods, we correct statistically, apparently for the first time, for the far less-than-perfect levels of inter-coder reliability that typically characterize human attempts to classify documents, an approach that will normally outperform even population hand coding when that is feasible. We illustrate these methods by tracking the daily opinions of millions of people about candidates for the 2008 presidential nominations in online blogs, data we introduce and make available with this article, and through evaluations in available corpora from other areas, including movie reviews and university web sites. We also offer easy-to-use software that implements all methods described.
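
A schematic sketch of the aggregate bias-correction idea behind the first method: an individual-level classifier's reported category proportions are corrected using misclassification rates estimated on the hand-coded sample. The two-category setup and all numbers below are invented for illustration; this is not the authors' implementation.

import numpy as np

# Schematic sketch of correcting a classifier's aggregate category proportions
# with its misclassification rates. Setup and numbers invented for illustration.

# C[j, k] = P(classifier assigns category j | true category is k),
# estimated from the small hand-coded sample.
C = np.array([[0.8, 0.3],
              [0.2, 0.7]])

# Proportions the classifier reports on the large unlabeled population.
p_observed = np.array([0.55, 0.45])

# Since p_observed = C @ p_true, recover the true category proportions.
p_true, *_ = np.linalg.lstsq(C, p_observed, rcond=None)
p_true = np.clip(p_true, 0, None)
p_true /= p_true.sum()                  # renormalize to a proper distribution
print("corrected category proportions:", p_true)   # -> [0.5, 0.5] in this toy example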

"Verbal Autopsy Methods with Multiple Causes of Death"
Verbal autopsy procedures are widely used for estimating cause-specific mortality in areas without medical death certification. Data on symptoms reported by caregivers along with the cause of death are collected from a medical facility, and the cause-of-death distribution is estimated in the population where only symptom data are available. Current approaches analyze only one cause at a time, involve assumptions judged difficult or impossible to satisfy, and require expensive, time-consuming, or unreliable physician reviews, expert algorithms, or parametric statistical models. By generalizing current approaches to analyze multiple causes, we show how most of the difficult assumptions underlying existing methods can be dropped. These generalizations also make physician review, expert algorithms, and parametric statistical assumptions unnecessary. With theoretical results, and empirical analyses in data from China and Tanzania, we illustrate the accuracy of this approach. While no method of analyzing verbal autopsy data, including the more computationally intensive approach offered here, can give accurate estimates in all circumstances, the procedure offered is conceptually simpler, less expensive, more general, as or more replicable, and easier to use in practice than existing approaches. We also show how our focus on estimating aggregate proportions, which are the quantities of primary interest in verbal autopsy studies, may also greatly reduce the assumptions necessary for, and thus improve the performance of, many individual classifiers in this and other areas. As a companion to this paper, we also offer easy-to-use software that implements the methods discussed herein.


Bio:
Gary King is the David Florence Professor of Government in the Department of Government at Harvard University. He also serves as Director of the Institute for Quantitative Social Science. King and his research group, composed of research associates from Harvard and from universities and institutes around the country, develop statistical and other methods for, and conduct diverse applications in, many areas of social science research, focusing on innovations that span the range from statistical theory to practical application. King has been listed as the most cited political scientist of his cohort; among the group of "political scientists who have made the most important theoretical contributions" to the discipline "from its beginnings in the late-19th century to the present"; and on the Institute for Scientific Information (ISI)'s list of the most highly cited researchers across the social sciences. King's research has been supported by the National Science Foundation, the Centers for Disease Control and Prevention, the World Health Organization, the National Institute on Aging, the Global Forum for Health Research, and numerous other agencies, centers, corporations, and foundations. He received his Ph.D. from the University of Wisconsin-Madison in 1984.

Speaker's websites:
http://gking.harvard.edu/

Link to background paper:
http://gking.harvard.edu/files/abs/words-abs.shtml

http://gking.harvard.edu/files/abs/vamc-abs.shtml

** Schedule:

Talk Time: 10:30am - 12:00pm (refreshments served)
Talk Location: Computer Science & Engineering (CSE) Building 1st floor, Room 1202

Fourth Lecture:

The Algorithmic Lens: How the Computational Perspective is Transforming the Sciences
Christos Papadimitriou, Electrical Engineering and Computer Sciences Department, UC Berkeley, CA

Date: Friday, December 14, 2007
Time: 1:00pm - 2:30 pm
Location: Calit2 Auditorium



Abstract:
The Algorithmic Lens: How the Computational Perspective is Transforming the Sciences

Computational research transforms the sciences (physical, mathematical, life or social) not just by empowering them analytically, but mainly by providing a novel and powerful perspective which often leads to unforeseen insights. Examples abound: quantum computation provides the right forum for questioning and testing some of the most basic tenets of quantum physics, while statistical mechanics has found in the efficiency of randomized algorithms a powerful metaphor for phase transitions. In mathematics, the P vs. NP problem has joined the list of the most profound and consequential problems, and in economics considerations of computational complexity revise predictions of economic behavior and affect the design of economic mechanisms such as auctions. Finally, in biology some of the most fundamental problems, such as understanding the brain and evolution, can be productively recast in computational terms.


Bio:
Christos Papadimitriou is a professor in the Computer Science Department at the University of California, Berkeley. He studied at the National Technical University of Athens (BS in Electrical Engineering, 1972) and at Princeton University (MS in Electrical Engineering, 1974, and PhD in Electrical Engineering and Computer Science, 1976). His interests lie in the theory of algorithms and complexity, and its applications to databases, optimization, AI, networks, and game theory. Papadimitriou is the author of many books, including Computational Complexity.

Speaker's websites:
http://www.cs.berkeley.edu/%7Echristos/

Link to background paper:
[N/A]


** Schedule:

Talk Time: 1:00pm - 2:30pm
Talk Location: Calit2 Auditorium (light refreshments served before talk)

Third Lecture:

Cooperation by Evolutionary Feedback Selection in Public Good Experiments
Didier Sornette, Department of Earth and Space Sciences, University of California, Los Angeles, CA

Date: Wednesday, December 5th, 2007
Time: 11:00am - 12:30 pm
Location: CSE 1202



Abstract:
Cooperation by Evolutionary Feedback Selection in Public Good Experiments

Strong reciprocity is a fundamental human characteristic associated with our extraordinary sociality and cooperation. Laboratory experiments on social dilemma games and many field studies have quantified well-defined levels of cooperation and propensity to punish/reward. The level of cooperation is observed to be strongly dependent on the availability of punishments and/or rewards. Here, we suggest that the propensity for altruistic punishment and reward is an emergent property that has co-evolved with cooperation by providing an efficient feedback mechanism through both biological and cultural interactions. By favoring high survival probability and large individual gains, the propensity for altruistic punishment and rewards reconciles self- and group interests.

We show that a simple cost/benefit analysis at the level of a single agent, who anticipates the action of her fellows, determines an optimal level of altruistic punishment, which explains quantitatively experimental results on the third-party punishment game, the ultimatum game and altruistic punishment games. We also report numerical simulations of an evolutionary agent-based model of repeated agent interactions with feedback-by-punishments, which confirms that the propensity to punish is a robust emergent property selected by the evolutionary rules of the model.
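
The toy simulation below illustrates the kind of feedback mechanism described above: in a repeated public goods game, agents who punish defectors at a personal cost can make contribution the better-performing strategy under imitation dynamics. The parameters and update rule are generic textbook choices, not the specific model or calibration in the paper.

import random

# Toy agent-based public goods game with costly punishment: defectors are
# fined by punishers at a cost to the punishers. Parameters and the imitation
# update rule are generic illustrative choices, not the paper's model.

def payoffs(strategies, multiplier=1.6, fine=1.0, punish_cost=0.3):
    n = len(strategies)
    pot = multiplier * sum(1 for s in strategies if s != "defect")   # contributions grow
    share = pot / n
    n_punishers = sum(1 for s in strategies if s == "punisher")
    n_defectors = sum(1 for s in strategies if s == "defect")
    pays = []
    for s in strategies:
        p = share - (0 if s == "defect" else 1)          # contributing costs 1
        if s == "defect":
            p -= fine * n_punishers                      # fined by every punisher
        if s == "punisher":
            p -= punish_cost * n_defectors               # punishing is costly
        pays.append(p)
    return pays

if __name__ == "__main__":
    rng = random.Random(1)
    pop = [rng.choice(["defect", "contribute", "punisher"]) for _ in range(20)]
    for _ in range(200):                                 # imitate better-scoring agents
        pays = payoffs(pop)
        i, j = rng.randrange(len(pop)), rng.randrange(len(pop))
        if pays[j] > pays[i]:
            pop[i] = pop[j]
    print({s: pop.count(s) for s in set(pop)})           # defectors typically die out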

Bio:
Professor Didier Sornette holds the Chair of Entrepreneurial Risks in the Department of Management, Technology and Economics (D-MTEC) at ETH Zurich (the Swiss Federal Institute of Technology). He also serves as a Visiting Professor in the Institute of Geophysics and Planetary Physics and the Department of Earth and Space Sciences at UCLA. His research interests include the prediction of crises and extreme events in complex systems, with applications to finance, economics, marketing, earthquakes, rupture, biology, and medicine. Professor Sornette is the author or coauthor of more than 350 research papers in refereed international journals and more than 120 papers in books and conference proceedings. He is a graduate of the Ecole Normale Supérieure (ENS Ulm, Paris) in physical sciences and received his Ph.D. from the University of Nice in 1985.

Speaker's websites:
http://www.er.ethz.ch/

Link to background paper:
[Click Here]


** Schedule:

Talk Time: 11:00am - 12:30pm
Talk Location: CSE 1202 (lunch provided)

Second Lecture:

Cognition in Context: Understanding "biases" in reasoning, learning, and decision making
Craig McKenzie, Rady School of Management and Department of Psychology, UC-San Diego, CA

Date: Friday, November 30th, 2007
Time: 12:00 pm
Location: CSE 1202



Abstract:
Cognitive heuristics, or mental rules of thumb, are often invoked as explanations of apparent errors or biases in human reasoning, learning, and decision making. I will argue that some well known "biases" in these areas are actually the result of people using rational (Bayesian) principles and being sensitive to the statistical structure of the environment.
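
A toy numeric illustration of the general point (the numbers are invented and this is not an example from the talk): when two binary features are rare in the environment, observing them together is far more diagnostic of a relationship than observing them both absent, so attending mostly to confirming co-occurrences need not be irrational for a Bayesian who knows the environment's statistics.

# Toy illustration (numbers invented): with rare features, a joint presence
# shifts a Bayesian's belief in "the features are related" much more than a
# joint absence does, so focusing on confirming co-occurrences can be rational.

# P(observation | hypothesis) for two hypotheses about two rare binary features.
p_joint_presence = {"related": 0.05, "independent": 0.01}
p_joint_absence  = {"related": 0.85, "independent": 0.81}

def posterior_related(likelihoods, prior_related=0.5):
    """Bayes' rule for the 'features are related' hypothesis."""
    num = likelihoods["related"] * prior_related
    den = num + likelihoods["independent"] * (1 - prior_related)
    return num / den

print("after seeing both features present:", round(posterior_related(p_joint_presence), 3))
print("after seeing both features absent: ", round(posterior_related(p_joint_absence), 3))
# The rare confirming observation moves beliefs much more than the common one.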

Bio:
Craig R. M. McKenzie is a professor of management in the Rady School and also a professor in the department of psychology at UC San Diego. He has been a faculty member at UC San Diego since receiving his Ph.D. in psychology in 1994 from the University of Chicago.

Professor McKenzie's research and teaching revolve around how people make decisions in the face of uncertainty, as well as how to help people make better decisions. Recent research projects include examining why merely rephrasing (or reframing) the available options can change people's decisions, how expertise benefits prediction, and how to get people to save more for retirement.

His research has been funded by the National Science Foundation (NSF) since 1996, and he has won research awards from NSF, the Operations Research Society of America, and the Society for Judgment and Decision Making. He currently serves on the editorial boards of several scholarly journals.

Speaker's websites:
http://management.ucsd.edu/faculty/directory/mckenzie/
http://www-psy.ucsd.edu/%7Emckenzie/

Link to background paper:
http://www-psy.ucsd.edu/%7Emckenzie/McKenzieJDMChapter2005.pdf


** Schedule:

Breakfast Time: 10:00am - 11:00am
Breakfast Location: Social Science Building (SSB) 107

Talk Time: 12:00pm - 1:30pm
Talk Location: CSE 1202 (lunch provided)

First Lecture:

Learning and the Wisdom of Crowds in Networks
Matthew O. Jackson, Economics, Stanford University, CA

Date: Friday, November 2, 2007
Time: 11:00 am - 12:00 pm (**see below for full day's schedule)
Location: Calit2's Atkinson Hall Auditorium



Abstract:
We study learning and influence among agents who are connected in a network and update their beliefs by repeatedly taking weighted averages of their neighbors' opinions. A focus is on conditions under which the beliefs of all agents in large societies converge to an accurate estimate of an unknown state. In addition, we show how each agent's influence on the eventual convergence point of societal beliefs changes with the updating weights. We also discuss how the speed of convergence depends on how segregated the society is, and how network structure is changing with technological change.
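
The updating rule described above is the classic DeGroot model: beliefs evolve as b(t+1) = T b(t) for a row-stochastic trust matrix T, and each agent's long-run influence is given by the left eigenvector of T associated with eigenvalue 1. A minimal sketch with an invented three-agent network:

import numpy as np

# Minimal sketch of repeated weighted-average (DeGroot-style) updating:
# beliefs evolve as b(t+1) = T b(t) for a row-stochastic trust matrix T.
# The three-agent network and weights are invented for illustration; the talk
# concerns large societies and convergence to an accurate estimate.

T = np.array([[0.6, 0.3, 0.1],      # how much each agent weights each neighbor
              [0.2, 0.5, 0.3],
              [0.1, 0.2, 0.7]])

beliefs = np.array([1.0, 0.0, 0.5])  # initial noisy estimates of the unknown state

for t in range(50):
    beliefs = T @ beliefs            # everyone averages neighbors' current opinions

# Long-run influence weights: the left eigenvector of T for eigenvalue 1,
# i.e., the stationary distribution of the corresponding Markov chain.
eigvals, eigvecs = np.linalg.eig(T.T)
stationary = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
stationary /= stationary.sum()

print("consensus belief:", beliefs.round(4))
print("influence weights:", stationary.round(4))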

Prof. Jackson's website: http://www.stanford.edu/~jacksonm/
Background paper: http://www.stanford.edu/~jacksonm/naivelearning.pdf

** Full day's schedule:

10-11 am Graduate Student Breakfast with speaker (Atkinson Hall, First Floor Lobby)
11-12 pm Talk (Atkinson Hall Auditorium)
12-1 pm Lunch
1-5 pm Meetings with speaker for interested faculty & graduate students (Atkinson Hall, room TBA)

For more information, please contact
Alex Wong: shw001@ucsd.edu.

 

Calit2, UC San Diego