Lecture Series – Second Semester of 2017

The talks take place in the Auditorium of the Laboratório de Sistemas Estocásticos (LSE), room I-044b, at 3:30 pm, except for a few duly indicated exceptions.

Complete list (talks scheduled for future dates may change)

Atypical observations, called outliers, are common in many real datasets. Classical procedures do not contemplate their existence, and therefore their application may lead to wrong conclusions. For instance, the sample mean or the least-squares fit of a regression model can be very adversely influenced by outliers, even by a single one. Robust methods arise to cope with these atypical observations, mitigating their impact on the final analysis. The median is probably the most popular example of a robust procedure to summarize a univariate dataset. In this talk we discuss the use of the median as a systematic tool to construct robust procedures. Robust doubly protected estimators of the quantiles of the distribution of a random response in a missing-at-random setting are discussed. Prediction intervals in a sufficient dimension reduction context are also presented.
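
As a minimal numerical illustration of the robustness this abstract builds on (an illustration, not material from the talk), the sketch below contrasts the sample mean and the median when a single gross outlier contaminates the data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=10.0, scale=1.0, size=99)   # clean univariate sample
x_bad = np.append(x, 1e6)                      # a single gross outlier

# The mean can be dragged arbitrarily far by one outlier...
print(np.mean(x), np.mean(x_bad))              # ~10 vs ~10010
# ...while the median barely moves (its breakdown point is 50%).
print(np.median(x), np.median(x_bad))
```
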
In order to classify or cluster a large set of data, statistical procedures sort the data by the size of some characteristic parameters. The classical mean/variance paradigm fits nicely into a setup with Gaussian distributions, but the results can be sensitive to small deviations from the assumed model. In order to improve robustness, one can use alternative measures of centrality, such as medians for scalar data. In a multidimensional setup we can start with so-called statistical depth functions and define a median set as a set of deepest points. The talk focuses on the best-known and most popular depth function, Tukey's depth (J. W. Tukey). Computing the depth is a hard problem, and although exact algorithms exist, they do not work in really high dimensions. We will present a new approximate algorithm that was used to solve a real problem with acoustic signals (joint work with Milica Bogićević, doctoral student of Applied Mathematics at the University of Belgrade, Serbia), and that can be used in the Big Data context, with very large data sets and high dimensions. Incidentally, J. W. Tukey is one of the pioneers of what is now called Data Science, and some interesting details from his life will be mentioned in the talk.
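
To make the object concrete: a common way to approximate Tukey (halfspace) depth is to replace the minimum over all directions by a minimum over random projections. The sketch below implements this generic random-direction heuristic; it is not the algorithm presented in the talk, and the function name and parameters are illustrative.

```python
import numpy as np

def approx_tukey_depth(x, data, n_dirs=1000, rng=None):
    """Approximate Tukey (halfspace) depth of point x w.r.t. data.

    Projects the cloud onto random unit directions and takes the
    smallest one-sided fraction of points; this upper bound on the
    exact depth tightens as n_dirs grows.
    """
    rng = np.random.default_rng(rng)
    n, d = data.shape
    u = rng.normal(size=(n_dirs, d))
    u /= np.linalg.norm(u, axis=1, keepdims=True)  # unit directions
    proj_data = data @ u.T           # (n, n_dirs) projected cloud
    proj_x = x @ u.T                 # (n_dirs,) projections of x
    below = (proj_data <= proj_x).mean(axis=0)     # mass on one side
    return float(np.minimum(below, 1.0 - below).min())

data = np.random.default_rng(1).normal(size=(500, 10))
print(approx_tukey_depth(data.mean(axis=0), data))  # deep point: large depth
print(approx_tukey_depth(data[0] + 5.0, data))      # outlying point: near 0
```
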
Real-life problems are often modeled in an uncertain setting, to better reflect unknown phenomena specific to the application. For problems in which decisions must be taken prior to observing the realization of the underlying random events, probabilistic constraints are an important modelling tool when reliability is a concern, in particular because they give a physical interpretation to risk.

Probabilistic constraints arise in many real-life problems, for example in electricity network expansion, mineral blending, and chemical engineering. Typically, these constraints are used when an inequality system involves random parameters considered critical for the decision-making process.

A key concept for dealing numerically with probabilistic constraints is that of p-efficient points. By adopting a dual point of view, we develop a solution framework that includes and extends various existing formulations. The unifying approach is built on a recent generation of bundle methods, known as bundle methods with on-demand accuracy, characterized by their versatility and flexibility.
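
For reference, a generic chance-constrained program can be written as follows (a standard textbook formulation, not specific to this work):

```latex
\min_{x \in X} \; c^{\top} x
\quad \text{s.t.} \quad
\mathbb{P}\!\left[\, T x \ge \xi \,\right] \;\ge\; p,
\qquad p \in (0,1).
```

A point $v$ is then called p-efficient for the distribution $F$ of $\xi$ if $F(v) \ge p$ and there is no $w \le v$, $w \ne v$, with $F(w) \ge p$; the probabilistic constraint can be replaced by $Tx \ge v$ for some p-efficient $v$, which is where the dual point of view enters.
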

The methodology is illustrated on the optimal management of a chain of cascaded reservoirs coupled with turbines producing electrical energy.

Joint work with W. van Ackooij, V. Berge and W. de Oliveira


There have been considerable advances in understanding the properties of the LASSO procedure in sparse high-dimensional models. Most of the work is, however, limited to the independent and identically distributed setting, whereas most time series extensions consider independent and/or Gaussian innovations. Kock and Callot (2016, Journal of Econometrics) derived equation-wise oracle inequalities for Gaussian vector autoregressive models. We extend their work to a broader class of innovation processes by assuming that the error process is non-Gaussian and conditionally heteroskedastic. This is of particular interest for financial risk modeling and covers several multivariate GARCH specifications, such as the BEKK model, as well as factor stochastic volatility specifications. We apply this method to model and forecast large panels of daily realized volatilities.
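
A minimal sketch of the equation-wise LASSO estimation that the oracle inequalities refer to, using scikit-learn on simulated data (the Gaussian toy data-generating process and the fixed, untuned penalty are stand-ins for illustration, not the conditionally heteroskedastic setting studied in the talk):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
T, k = 500, 20                         # sample size and dimension
A = np.zeros((k, k))
A[np.diag_indices(k)] = 0.5            # sparse VAR(1) coefficient matrix
y = np.zeros((T, k))
for t in range(1, T):                  # simulate y_t = A y_{t-1} + e_t
    y[t] = y[t - 1] @ A.T + rng.normal(size=k)

X, Y = y[:-1], y[1:]                   # lagged regressors / targets
# Equation-by-equation LASSO: one penalized regression per component.
A_hat = np.vstack([Lasso(alpha=0.05).fit(X, Y[:, j]).coef_ for j in range(k)])
print(np.abs(A_hat - A).max())         # recovery error of the sparse matrix
```
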
We consider a new, flexible, and easy-to-implement method to estimate causal effects of an intervention on a single treated unit when a control group is not readily available. We propose a two-step methodology where, in the first stage, a counterfactual is estimated from a large-dimensional set of variables from a pool of untreated units using shrinkage methods, such as the Least Absolute Shrinkage and Selection Operator (LASSO). In the second stage, we estimate the average intervention effect on a vector of variables; the resulting estimator is consistent and asymptotically normal. Our results are valid uniformly over a wide class of probability laws. Furthermore, we show that these results still hold when the exact date of the intervention is unknown. Tests for multiple interventions and for contamination effects are also derived. By a simple transformation of the variables of interest, it is also possible to test for intervention effects on several moments (such as the mean or the variance) of the variables of interest. Existing methods in the literature usually test for intervention effects on a single variable (the univariate case) and assume the time of the intervention to be known. In addition, high-dimensionality is frequently ignored, and inference is conducted either under a set of more stringent hypotheses or by permutation tests. A Monte Carlo experiment evaluates the properties of the method in finite samples and compares it with alternatives such as differences-in-differences, factor models, and the synthetic control method. In an application, we evaluate the effects on inflation of an anti-tax-evasion program.
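
A schematic version of the two-step idea on synthetic data (all names and the data-generating process are invented for illustration; here the intervention date is treated as known, unlike the more general results of the talk):

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(42)
T, T0, n_peers = 200, 120, 50          # sample size, intervention date, pool
peers = rng.normal(size=(T, n_peers))  # untreated units (high-dimensional)
beta = np.zeros(n_peers)
beta[:3] = 1.0                         # only a few peers actually matter
y = peers @ beta + rng.normal(scale=0.5, size=T)
y[T0:] += 2.0                          # intervention effect after T0

# Step 1: fit the counterfactual on pre-intervention data only.
model = LassoCV(cv=5).fit(peers[:T0], y[:T0])
y_cf = model.predict(peers[T0:])       # counterfactual path after T0

# Step 2: average intervention effect = mean post-intervention gap.
delta_hat = (y[T0:] - y_cf).mean()
print(delta_hat)                       # should be close to 2.0
```
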
Fixation in finite populations: discrete and continuous views
Max Oliveira de Souza (UFF)

We will present two different viewpoints on fixation. In the first part of the talk, we identify a general class of evolutionary processes which share many features with the classical Moran and Wright-Fisher (WF) processes, and which includes both of them. We also identify when a process in this class has a strictly increasing fixation probability, and show that WF processes may have a decreasing fixation probability, contrary to Moran processes. We also show that WF is universal from the point of view of fixation: given almost any fixation vector, there is at least one WF process that realises it. In the second part, we show how to construct continuous approximations of the fixation probability for birth-death processes that are valid beyond the weak-selection limit. Using this approximation we give continuous restatements of two classical concepts from the discrete setting: (i) the ESS$_N$ and (ii) risk-dominant strategies. In particular, we obtain an asymptotic definition, in the quasi-neutral regime, of the celebrated 1/3 law. This is joint work with FACC Chalub.
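
As background for the discrete part, the sketch below computes exact fixation probabilities of the Moran process with constant fitness r via the classical birth-death formula (a textbook computation, not the new results of the talk):

```python
import numpy as np

def moran_fixation(N, r):
    """Fixation probabilities rho_i of i mutants of fitness r in a
    Moran process of size N, from the classical birth-death formula
    rho_i = (sum_{j=0}^{i-1} gamma^j) / (sum_{j=0}^{N-1} gamma^j),
    where gamma = d_k / b_k = 1/r is the death/birth rate ratio."""
    gamma = 1.0 / r
    cum = np.cumsum(gamma ** np.arange(N))  # partial sums of gamma^j
    return np.concatenate(([0.0], cum / cum[-1]))

rho = moran_fixation(N=100, r=1.02)
print(rho[1])        # single-mutant fixation probability
print(1 - 1 / 1.02)  # large-N limit (1 - 1/r), for comparison
```
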

For some authors, the twentieth century marked the end of the great certainties in the history of science, the end of the age of innocence. In economics, unfortunately, we reached the twenty-first century without the great certainties being shaken in the least, even though they have no empirical support whatsoever. The 2008/2009 crisis led to an intense debate, and renowned economists were called before the US Congress to explain why they had not been able to foresee the Great Crisis. The social sciences in general lack a common method of refuting theories that is accepted by all, which allows antagonistic theories for explaining the same phenomena to coexist. At the same time, just like the theoretical arguments, the implications derived for economic policy are equally antagonistic. The talk will present a theoretical and methodological framework distinct from the economic mainstream and will discuss the different implications for economic policy, seeking to bring the analysis to the current situation of the Brazilian economy and the existing alternative proposals.

There is a near consensus that the proportion of suicides committed with firearms relative to total suicides is the best way to indirectly measure the prevalence of such weapons. However, this proxy is not accurate for localities with low population density, given that suicides are rare events. To circumvent this problem, we exploit the socioeconomic characteristics of suicide victims in order to propose a new proxy for firearm prevalence. We evaluate our indicator using suicide microdata from the Brazilian Ministry of Health (Ministério da Saúde, MS) from 2000 to 2010.
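
In code, the classical proxy is just a within-locality share; the toy example below (hypothetical column names, fabricated rows for illustration only) also shows why it is noisy wherever suicide counts are small, which is the problem the talk addresses:

```python
import pandas as pd

# Toy suicide microdata: one row per recorded suicide.
df = pd.DataFrame({
    "municipality": ["A", "A", "A", "A", "A", "B", "B"],
    "firearm":      [1, 0, 1, 0, 1, 1, 0],  # 1 = suicide by firearm
})
# Classical proxy: share of suicides committed with a firearm (FSS).
fss = df.groupby("municipality")["firearm"].mean()
print(fss)  # municipality B has only 2 events, so its share is unstable
```
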
We discuss an “operational” approach to testing convex composite hypotheses when the underlying distributions are heavy-tailed. It relies upon Euclidean separation of convex sets and can be seen as an extension of the approach to testing by convex optimization. In particular, we show how one can construct quasi-optimal testing procedures for families of distributions which are majorized, in a certain precise sense, by a sub-spherical symmetric one, and we study the relationship between tests based on Euclidean separation and “potential-based” tests. We apply the proposed methodology to the problem of sequential detection and illustrate its practical implementation in an application to sequential detection of changes in the input of a dynamic system. (Joint work with Anatoli Juditsky and Arkadi Nemirovski)
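
The basic construction behind tests based on Euclidean separation of two convex hypotheses $H_i : \mathbb{E}[\omega] \in X_i$, $i = 1, 2$, can be sketched as follows (a simplified textbook version of the idea, not the paper's exact procedure):

```latex
(x_1^*, x_2^*) \in \arg\min_{x_1 \in X_1,\; x_2 \in X_2} \|x_1 - x_2\|_2,
\qquad
\phi(\omega) = \Big\langle x_1^* - x_2^*,\; \omega - \tfrac{1}{2}(x_1^* + x_2^*) \Big\rangle .
```

One accepts $H_1$ when $\phi(\omega) \ge 0$ and $H_2$ otherwise: the affine detector $\phi$ separates the two sets along the direction realizing their Euclidean distance.
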
Finite population inference is an area of statistics notably recognized for its practical importance and for its apparent dichotomy with other areas of statistical inference. Nevertheless, inference under superpopulation models has goals quite similar to those of statistical inference in general. In this talk, the theoretical foundations of model-based inference for finite populations will be covered, and the differences with respect to standard inference will be highlighted. The role of the sampling design in the inference about parameters of interest will also be discussed. Finally, we will present some applications and recent developments in the area.
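
As design-based background for the role of the sampling design, here is a minimal sketch of the Horvitz-Thompson estimator under Poisson sampling (a textbook illustration with simulated data, not material from the talk):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 10_000
size = rng.gamma(shape=2.0, scale=1.0, size=N)    # auxiliary size measure
y = 50.0 * size + rng.normal(scale=10.0, size=N)  # survey variable
pi = np.clip(size / size.sum() * 500, 1e-3, 1.0)  # inclusion probabilities
s = rng.random(N) < pi                            # Poisson sampling design

# Horvitz-Thompson: weighting by 1/pi makes the estimated total
# design-unbiased, whatever superpopulation model generated y.
t_hat = (y[s] / pi[s]).sum()
print(t_hat, y.sum())
```
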
Brazil has a dual higher education market in which tuition-free public institutions coexist with tuition-funded private enterprises. About 3/4 of enrollments are in private higher education institutions (HEIs). Although the sector remains heavily regulated, since the market liberalization of 1997 private institutions have been allowed to merge with and acquire (M&A) other private HEIs. We provide an overview of the recent growth of this sector and the significant role of mergers and acquisitions. We show that entry rates are small and that the fastest-growing HEIs exploited M&As extensively. We evaluate the effects of mergers on employment. Using difference-in-differences analysis, we estimate that, on average, faculty size shrinks after a merger, proportionally to the reduction in enrollment.
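
A minimal sketch of the canonical two-group, two-period difference-in-differences regression on a synthetic panel (variable names and the effect size are illustrative, not the paper's data):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n, T = 200, 6
df = pd.DataFrame({
    "hei":    np.repeat(np.arange(n), T),
    "year":   np.tile(np.arange(T), n),
    "merged": np.repeat(rng.integers(0, 2, n), T),  # ever-merged indicator
})
df["post"] = (df["year"] >= 3).astype(int)          # mergers happen at t = 3
df["faculty"] = (100 - 5.0 * df["merged"] * df["post"]
                 + rng.normal(scale=3.0, size=len(df)))

# Canonical DiD: the interaction coefficient is the merger effect.
fit = smf.ols("faculty ~ merged + post + merged:post", data=df).fit()
print(fit.params["merged:post"])                    # close to -5
```
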


The main goal of ERICA (Study of Cardiovascular Risks in Adolescents) was to estimate the prevalence of cardiovascular risk factors in adolescents aged 12 to 17 who attended public and private schools in Brazilian cities with more than 100 thousand inhabitants. The study will also enable the investigation of several associations involving sociodemographic characteristics, cardiovascular risk factors, and metabolic changes. In addition to the questionnaire filled out by 85,000 adolescents, weight, height, waist circumference, and blood pressure were measured. Moreover, in a subsample of approximately 42,000 adolescents studying in the morning shift, blood was drawn for measuring lipids, glucose, insulin, and glycated hemoglobin.