Early evidence on who is working from home during the pandemic

In late March, Brent Neiman and I posted a paper addressing a straightforward and suddenly pressing question: How Many Jobs Can be Done at Home?

Our aim was to describe what is feasible. Looking at pre-2020 practices, one would not have observed many high-school teachers working from home, but the global pandemic changed that. We used information on job characteristics to estimate which occupations could be performed entirely at home. Of course, this supply-side trait is only one important ingredient when thinking about jobs during the crisis. Demand-side considerations, such as designating a job as “essential”, are clearly important too. Couriers and messengers cannot work from home, but this industry has seen robust employment growth in recent months.

Enough time has passed that we are now learning who has been working at home during the pandemic. In a recent Economics Observatory column (What has coronavirus taught us about working from home?) and the latest version of our paper, Brent and I discuss some of this evidence. The initial evidence suggests that our classification of occupations is quite sensible.

In the United States, Alexander Bick, Adam Blandin, and Karel Mertens have been conducting a Real-Time Population Survey, an online survey of adults designed to mimic the Current Population Survey. Last week, they released a paper called “Work from Home After the COVID-19 Outbreak”. They report that 35 percent of their US respondents worked entirely from home in May 2020. Their Figure 1 shows that the share of respondents in an industry working from home in May is highly correlated with our estimate of the feasible share for that industry.

In Europe, the EU’s Eurofound launched an e-survey, Living, working and COVID-19, “to capture the most immediate changes during the pandemic and their impact.” Last month, they released first results on the impact of the pandemic on work and teleworking. As we report in our latest draft, there is a close correspondence between our country-level estimates of feasibility and what has occurred during the crisis.

Finally, while the latest update of the relevant paper hasn’t been posted online yet, in the video presentation below, Ed Glaeser reports that the industry-level variation in the share of jobs reported as being performed at home in a survey of small businesses is highly correlated with our industry-level feasible shares.

We classified the feasibility of working from home based on pre-pandemic conditions. Over time, I expect businesses to adapt their practices and leverage new tools to reallocate tasks and change the nature of jobs. A pressing question, which I briefly discussed at the end of a recent seminar presentation, is whether this temporary surge in remote work will have permanent consequences for the future of work.

In the short run, using pre-pandemic job characteristics to classify which jobs can be done at home has aligned well with who has actually been working at home during the pandemic.

Spatial economics for granular settings

Economists studying spatial connections are excited about a growing body of increasingly fine spatial data. We’re no longer studying country- or city-level aggregates. For example, many folks now leverage satellite data, so that their unit of observation is a pixel, which can be as small as only 30 meters wide. See Donaldson and Storeygard’s “The View from Above: Applications of Satellite Data in Economics”. Standard administrative data sources like the LEHD publish neighborhood-to-neighborhood commuting matrices. And now “digital exhaust” extracted from the web and smartphones offers a glimpse of behavior not even measured in traditional data sources. Dave Donaldson’s keynote address on “The benefits of new data for measuring the benefits of new transportation infrastructure” at the Urban Economics Association meetings in October highlighted a number of such exciting developments (ship-level port flows, ride-level taxi data, credit-card transactions, etc.).

But finer and finer data are not a free lunch. Big datasets bring computational burdens, of course, but more importantly our theoretical tools need to keep up with the data we’re leveraging. Most models of the spatial distribution of economic activity assume that the number of people per place is reasonably large. For example, theoretical results describing space as continuous formally assume a “regular” geography so that every location has positive population. But the US isn’t regular, in that it has plenty of “empty” land: more than 80% of the US population lives on only 3% of its land area. Conventional estimation procedures aren’t necessarily designed for sparse data sets. It’s an open question how well these tools will do when applied to empirical settings that don’t quite satisfy their assumptions.

Felix Tintelnot and I examine one aspect of this challenge in our new paper, “Spatial Economics for Granular Settings”. We look at commuting flows, which are described by a gravity equation in quantitative spatial models. It turns out that the empirical settings we often study are granular: the number of decision-makers is small relative to the number of economic outcomes. For example, there are 4.6 million possible residence-workplace pairings in New York City, but only 2.5 million people who live and work in the city. Applying the law of large numbers may not work well when a model has more parameters than people.
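
For concreteness, here is a stylized commuting gravity equation of the sort these models deliver (a sketch; the paper’s exact specification differs in its details):

\[
\mathbb{E}\left[N_{ni}\right] \;=\; L \cdot \frac{B_n\, A_i\, \kappa_{ni}^{-\epsilon}}{\sum_{n'}\sum_{i'} B_{n'}\, A_{i'}\, \kappa_{n'i'}^{-\epsilon}},
\]

where N_ni is the number of commuters residing in tract n and working in tract i, B_n and A_i capture residential and workplace attractiveness, κ_ni ≥ 1 is the commuting cost, ε is the cost elasticity, and L is the number of people who live and work in the city. In a continuum model, observed commuting shares equal these choice probabilities exactly; in a granular city, realized integer flows fluctuate around this expectation.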

Felix and I introduce a model of a “granular” spatial economy. “Granular” just means that we assume that there are a finite number of individuals rather than an uncountably infinite continuum. This distinction may seem minor, but it turns out that estimated parameters and counterfactual predictions are pretty sensitive to how one handles the granular features of the data. We contrast the conventional approach and granular approach by examining these models’ predictions for changes in commuting flows associated with tract-level employment booms in New York City. When we regress observed changes on predicted changes, our granular model does pretty well (slope about one, intercept about zero). The calibrated-shares approach (trade folks may know this as “exact hat algebra”), which perfectly fits the pre-event data, does not do very well. In more than half of the 78 employment-boom events, its predicted changes are negatively correlated with the observed changes in commuting flows.

The calibrated-shares procedure’s failure to perform well out of sample despite perfectly fitting the in-sample observations may not surprise those who have played around with machine learning. The fundamental concern with applying a continuum model to a granular setting can be illustrated by the finite-sample properties of the multinomial distribution. Suppose that a lottery allocates I independently-and-identically-distributed balls across N urns. An econometrician wants to infer the probability that any ball i is allocated to urn n from observed data. With infinite balls, the observed share of balls in urn n would reveal this probability. In a finite sample, the realized share may differ greatly from the underlying probability. The figure below depicts this ratio for one urn when I balls are distributed across 10 urns uniformly. A procedure that equates observed shares and modeled probabilities needs this ratio to be one. As the histograms reveal, the realized ratio can be far from one even when there are two orders of magnitude more balls than urns. Unfortunately, in many empirical settings in which spatial models are calibrated to match observed shares, the number of balls (commuters) and the number of urns (residence-workplace pairs) are roughly the same. The red histogram suggests that shares and probabilities will often differ substantially in these settings.

Figure: histogram of the realized share divided by the underlying probability (I balls allocated across 10 urns)
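
A back-of-the-envelope calculation (standard binomial algebra, not a result from the paper) conveys why this ratio is so dispersed. With I balls allocated independently and uniformly across N urns, the count in any given urn is binomially distributed, so the realized share s_n = X_n / I satisfies

\[
X_n \sim \mathrm{Binomial}\!\left(I, \tfrac{1}{N}\right), \qquad
\mathbb{E}\!\left[\frac{s_n}{1/N}\right] = 1, \qquad
\mathrm{sd}\!\left(\frac{s_n}{1/N}\right) = \sqrt{\frac{N-1}{I}}.
\]

With N = 10 urns, the ratio’s standard deviation is about 0.95 when I = 10, about 0.30 when I = 100, and still about 0.09 when I = 1,000, so sizeable gaps between realized shares and underlying probabilities are routine even with far more balls than urns.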

Granularity is also a reason for economists to be cautious about their counterfactual exercises. In a granular world, equilibrium outcomes depend in part on the idiosyncratic components of individuals’ choices. Thus, the confidence intervals reported for counterfactual outcomes ought to incorporate uncertainty due to granularity in addition to the usual statistical uncertainty that accompanies estimated parameter values.

See the paper for more details on the theoretical model, estimation procedure, and event-study results. We’re excited about the growing body of fine spatial data used to study economic outcomes for regions, cities, and neighborhoods. Our quantitative model is designed precisely for these applications.

Do customs duties compound non-tariff trade costs? Not in the US

For mathematical convenience, economists often assume iceberg trade costs when doing quantitative work. When tackling questions of trade policy, analysts must distinguish trade costs from import taxes. For the same reason that multiplicative iceberg trade costs are tractable, in these exercises it is easiest to model trade costs as the product of non-policy trade costs and ad valorem tariffs. For example, when studying NAFTA, Caliendo and Parro (2015) use the following formulation:

Caliendo and Parro (REStud, 2015), equation (3)
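
In their notation, total trade costs are roughly the product of an iceberg term and a gross tariff term (see the published paper for the exact expression):

\[
\kappa^{j}_{ni} \;=\; \left(1 + \tau^{j}_{ni}\right) d^{j}_{ni},
\]

where κ^j_ni is the total cost of delivering sector-j goods from country i to country n, τ^j_ni is the ad valorem tariff, and d^j_ni ≥ 1 is the iceberg (non-policy) trade cost.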

This assumption’s modeling convenience is obvious, but do tariff duties actually compound other trade costs? The answer depends on the importing country. Here’s Amy Porges, a trade attorney, answering the query on Quora:


The US is one of the few countries where tariffs are applied on the basis of FOB value. Why? Article I, section 9 of the US Constitution provides that “No Preference shall be given by any Regulation of Commerce or Revenue to the Ports of one State over those of another”, and this has been interpreted as requiring that the net tariff must be the same at every port. If a widget is loaded in Hamburg and shipped to NY, its CIF price will be different than if it were shipped to New Orleans or San Francisco. However the FOB price of the widget shipped from Hamburg will be the same regardless of destination.

Here’s a similar explanation from Neville Peterson LLP.

On page 460 of The Law and Policy of the World Trade Organization, we learn that Canada and Japan also take this approach.

Pursuant to Article 8.2, each Member is free either to include or to exclude from the customs value of imported goods: (1) the cost of transport to the port or place of importation; (2) loading, unloading, and handling charges associated with the transport to the port or place of importation; and (3) the cost of insurance. Note in this respect that most Members take the CIF price as the basis for determining the customs value, while Members such as the United States, Japan and Canada take the (lower) FOB price.
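
A hypothetical example with made-up numbers illustrates the difference. Suppose a shipment has an FOB value of $100, freight and insurance charges of $10, and faces a 5 percent ad valorem tariff:

\[
\text{FOB basis (US): } 0.05 \times \$100 = \$5.00, \qquad
\text{CIF basis: } 0.05 \times (\$100 + \$10) = \$5.50.
\]

Only under the CIF basis does the duty compound the non-tariff trade costs, as the multiplicative formulation assumes.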

While multiplicative separability is a convenient modeling technique, in practice ad valorem tariff rates don’t multiply other trade costs for two of the three NAFTA members.

How many jobs can be done at home?

Brent Neiman and I wrote a paper that tackles a simple question: “How Many Jobs Can be Done at Home?” The latest draft (April 16) is here. The full replication package is available on GitHub.

We estimate that 37% of US jobs, accounting for 46% of wages, can be performed entirely at home. Applying our occupational classifications to 85 other countries reveals that lower-income economies have a lower share of jobs that can be done at home.

This simple question is suddenly very important during this pandemic. See the Wall Street Journal, among other outlets, for reactions. I did a video interview with CEPR about our paper, which includes some thoughts about offshoring and the future of telecommuting. My comments also appeared in a story titled “You’re Not Going Back to Normal Office Life for a Long, Long Time”.

Shift-share designs before Bartik (1991)

The phrase “Bartik (1991)” has become synonymous with the shift-share research designs employed by many economists to investigate a wide range of economic outcomes. As Baum-Snow and Ferreira (2015) describe, “one of the commonest uses of IV estimation in the urban and regional economics literature is to isolate sources of exogenous variation in local labor demand. The commonest instruments for doing so are attributed to Bartik (1991) and Blanchard and Katz (1992).”

The recent literature on the shift-share research design usually starts with Tim Bartik’s 1991 book, Who Benefits from State and Local Economic Development Policies?. Excluding citations of Roy (1951) and Jones (1971), Bartik (1991) is the oldest work cited in Adao, Kolesar, Morales (QJE 2019). The first sentence of Borusyak, Hull, and Jaravel’s abstract says “Many studies use shift-share (or “Bartik”) instruments, which average a set of shocks with exposure share weights.”

But shift-share analysis is much older. A quick search on Google Books turns up a bunch of titles from the 1970s and 1980s, such as “The Shift-share Technique of Economic Analysis: An Annotated Bibliography”.

Why the focus on Bartik (1991)? Goldsmith-Pinkham, Sorkin, and Swift, whose paper’s title is “Bartik Instruments: What, When, Why and How”, provide some explanation of how Bartik’s name became attached to the approach.


I wonder what Tim Bartik would make of their account. His 1991 book is freely available as a PDF from the Upjohn Institute. Here is his description of the instrumental variable in Appendix 4.2:

In this book, only one type of labor demand shifter is used to form instrumental variables [endnote 2]: the share effect from a shift-share analysis of each metropolitan area and year-to-year employment change. [endnote 3] A shift-share analysis decomposes MSA growth into three components: a national growth component, which calculates what growth would have occurred if all industries in the MSA had grown at the all-industry national average; a share component, which calculates what extra growth would have occurred if each industry in the MSA had grown at that industry’s national average; and a shift component, which calculates the extra growth that occurs because industries grow at different rates locally than they do nationally…

The instrumental variables defined by equations (17) and (18) will differ across MSAs and time due to differences in the national economic performance during the time period of the export industries in which that MSA specializes. The national growth of an industry is a rough proxy for the change in national demand for its products. Thus, these instruments measure changes in national demand for the MSA’s export industries…
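
In generic notation (mine, not Bartik’s equations (17) and (18)), the decomposition he describes is

\[
g_m \;=\; \underbrace{\sum_i \omega_{m,i}\, g}_{\text{national growth}}
\;+\; \underbrace{\sum_i \omega_{m,i}\left(g_i - g\right)}_{\text{share}}
\;+\; \underbrace{\sum_i \omega_{m,i}\left(g_{m,i} - g_i\right)}_{\text{shift}},
\]

where ω_{m,i} is industry i’s initial share of employment in MSA m, g_{m,i} is industry i’s local growth rate, g_i is its national growth rate, and g is the all-industry national growth rate. The share component, built only from local industry composition and national industry growth rates, is the demand shifter Bartik uses to form instruments.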

Back in Chapter 7, Bartik writes:

The Bradbury, Downs, and Small approach to measuring demand-induced growth is similar to the approach used in this book. Specifically, they used the growth in demand for each metropolitan area’s export industries to predict overall growth for the metropolitan area. That is, they used the share component of a shift-share analysis to predict overall growth.

Hence, endnote 3 of Appendix 4.2 on page 282:

This type of demand shock instrument was previously used in the Bradbury, Downs and Small (1982) book; I discovered their use of this instrument after I had already come up with my approach. Thus, I can only claim the originality of ignorance for my use of this type of instrument.


Update (10am CT): In response to my query, Tim has posted a tweetstorm describing Bradbury, Downs, and Small (1982).

The rapid rise of spatial economics among JMCs

Two years ago, my list of trade candidates also included a dozen candidates in spatial economics. Last year I listed 20 candidates. There are 45 spatial-economics JMCs in this year’s list. That looks like a rapid rise.

Of course, measurement problems abound. My view of “spatial economics” may have broadened during the last two years, in which case the listings would tell you more about me than about the candidates. That would be hard to quantify. But, to focus on one label within the broader spatial economics nexus, I’m pretty sure that I’m seeing more candidates explicitly list “urban economics” as one of their fields than in years prior.

If I’m right that the supply of spatial economists is rising, then one immediately wonders if the demand side will keep pace. I haven’t looked at JOE postings, but I doubt that ads mentioning “urban economics” are growing at the same rate as candidates listing it as a field.

Last month, in response to a Beatrice Cherrier query about why urban economics’ “boundaries & identity are so difficult to pin down,” Jed Kolko noted that “urban economists typically align strongly to another field — trade, labor, PF, finance (esp the real estate types), macro.” That fluidity has advantages and disadvantages. It certainly makes it challenging to compile a list of relevant job-market candidates. But my very short time series of arbitrarily collated candidates suggests growth in the supply of young spatial economists.


Here’s a list of job-market candidates whose job-market papers fall within spatial economics, as defined by me when glancing at a webpage for a few seconds. Illinois has six candidates! I’m sure I missed folks, so please add them in the comments.

The annual list of trade candidates is a distinct post.

Of the 45 candidates I’ve initially listed, 18 used Google Sites, 12 registered a custom domain, 3 used GitHub, and 12 used school-provided webspace.

Here’s a cloud of the words that appear in these papers’ titles:

Trade JMPs (2019-2020)

It’s November again. Time flies, and there’s a new cohort of job-market candidates. Time really flies: I started this series a decade ago! Many members of that November 2010 cohort now have tenure or will soon.

As usual, I’ve gathered a list of trade-related job-market papers. There is no clear market leader: the most candidates from one school by my count is three (Berkeley, Maryland, UCLA). If I’ve missed someone, please contribute to the list in the comments.

A separate post lists candidates in spatial economics, broadly defined.

Of the 31 candidates I’ve initially listed, 14 registered a custom domain, 9 used Google Sites, 2 used GitHub, and only 6 used school-provided webspace.

Here’s a cloud of the words that appear in these papers’ titles:


Use build tools to automate your research code

Software build tools automate compiling source code into executable binaries. (For example, if you’ve installed Linux packages, you’ve likely used Make.)

Like software packages, research projects are large collections of code that are executed in sequence to produce output. Your research code has a first step (download raw data) and a last step (generate paper PDF). Its input-output structure is a directed graph (dependency graph).

The simplest build approach for a Stata user is a “master” do file. If a project involves A through Z, this master file executes A, B, …, Y, and Z in order. But the “run everything” approach is inefficient: if you edit Y, you only need to run Y and Z; you don’t need to run A through X again. Software build tools automate these processes for you. They can be applied to all of your research code.

Build tools use a dependency graph and information about file changes (e.g., timestamps) to produce output using (all and only) necessary steps. Build automation is valuable for any non-trivial research project. Build automation can be particularly valuable for big data. If you need to process data for 100 cities, you shouldn’t manually track which cities are up-to-date and which need to run the latest code. Define the dependencies and let the build tool track everything.

Make is an old, widely used build tool. It should be available on every Linux box by default (e.g., it’s available inside the Census RDCs). For Mac users, Make is included in OS X’s developer tools. I use Make. There are other build tools. Gentzkow and Shapiro use SCons (a Python-based tool). If all of your code is Stata, you could try the project package written by Robert Picard, though I haven’t tried it myself.

A Makefile consists of a dependency graph and a recipe for each graph node. Define dependencies by writing a target before the colon and that target’s prerequisites after the colon. The next line gives the recipe that translates those inputs into output. Make can execute any recipe you can write on the command line.

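Here is a minimal sketch of a Makefile for a stylized three-step project (the file names and commands are hypothetical; adapt the recipes to your own tools):

```make
# Hypothetical pipeline: download raw data, clean it in Stata, compile the paper PDF.
# Each rule reads "target: prerequisites"; the recipe on the next line must be indented with a tab.

all: paper.pdf

data/raw.csv: code/download.sh
	bash code/download.sh

data/clean.dta: code/clean.do data/raw.csv
	stata -b do code/clean.do

paper.pdf: paper.tex data/clean.dta
	pdflatex paper.tex

.PHONY: all
```

Typing make rebuilds only what is out of date: if you edit code/clean.do, Make reruns the cleaning step and recompiles the PDF, but it does not repeat the download.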

I have written much more about Make and Makefiles in Section A.3 of my project template. Here are four introductions to Make, listed in the order that I suggest reading them:

What’s an “iceberg commuting cost”?

In the recent quantitative spatial economics literature, the phrase “iceberg commuting cost” appears somewhat often. The phrase primarily appears in papers coauthored by Stephen Redding (ARSW 2015, RR 2017, MRR 2018, HRS 2018), but it’s also been adopted by other authors (Fratto 2018, Gaigne et al 2018, among others). However, none of these papers explicitly explains the meaning of the phrase. Why are we calling these commuting costs “iceberg”?

The phrase was imported from international economics, where the concept of “iceberg transport costs” is widely used. That idea is also explicitly defined. Alan Deardorff’s glossary says:

A cost of transporting a good that uses up some fraction of the good itself, rather than other resources. By analogy with floating an iceberg, costless except for the part of the iceberg that melts. Far from realistic, but a tractable way of modeling transport costs since it impacts no other market. Due to Samuelson (1954).

Two bits of trivia that aren’t very relevant to the rest of the post: these should be called “grain transport costs”, because von Thunen introduced the idea with oxen-pulled grain carts more than a century before Samuelson (1954); and basic physics means there are actually economies of scale in shipping ice.

Why do we use the iceberg assumption? As Deardorff highlights, it lets us skip modeling the transportation sector. By assumption, the same production function that produces the good also produces its delivery to customers. For better or worse, that means that international or long-distance transactions don’t affect factor demands or transport prices by being international or long-distance per se (Matsuyama 2007). This is one way of keeping trade models simple. Per Gene Grossman: “few would consider the ‘iceberg’ formulation of shipping costs as anything more than a useful trick for models with constant demand elasticities.”
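
In symbols, the assumption is simply that delivering one unit from origin o to destination d requires shipping τ_od ≥ 1 units, so the delivered price is a markup over the origin price:

\[
p_{od} \;=\; \tau_{od}\, p_{o}, \qquad \tau_{od} \ge 1 .
\]

The fraction 1 - 1/τ_od of every shipment “melts” in transit, and no separate transport sector earns revenue from the trip.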

In urban economics, saying that commuting costs take the “iceberg” form means that the model abstracts from transportation infrastructure and the transport sector. Commuters “pay” commuting costs by suffering lower utility. There is no supplier of transportation services that earns any revenues. (Given that most US roads are unpriced, this isn’t much of an abstraction.) But, just as folding transportation services into the goods-producing firm’s production function has consequences for trade models, saying that commuting enters the utility function directly has consequences for the economic content of urban models.

Given that these models do not feature a labor-leisure tradeoff, there is an equivalence between utility costs and time costs. As described by Ahlfeldt, Redding, Sturm, and Wolf (2015): “Although we model commuting costs in terms of utility, there is an isomorphic formulation in terms of a reduction in effective units of labor, because the iceberg commuting cost enters the indirect utility function (5) below multiplicatively.” If the cost of commuting is mostly about the opportunity cost of time, then this modeling device captures it reasonably well in a model with homogeneous workers.
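
A stylized version of that isomorphism (a sketch, not their exact equation (5)): let a worker who lives in n and works in i have Cobb-Douglas preferences over a tradable good and housing, and let the commuting cost κ_ni ≥ 1 enter utility multiplicatively. Indirect utility is then proportional to

\[
V_{ni} \;\propto\; \frac{w_i}{\kappa_{ni}\; P_n^{\beta}\, Q_n^{1-\beta}},
\]

where w_i is the workplace wage, P_n is the tradable-goods price index, Q_n is the rent, and β is the expenditure share on tradables. Dividing the wage by κ_ni is the same as assuming the commuter delivers only 1/κ_ni effective units of labor, so the utility-cost and time-cost formulations coincide.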

If workers are heterogeneous in their hourly wages, then their opportunity costs of time differ. Higher-wage workers have higher opportunity costs of time. In the classic model of locational choice (see Kevin Murphy’s lecture), this causes higher-wage workers to be more willing to pay for residential locations that give them shorter commutes. In the typical quantitative spatial model, however, preferences are Cobb-Douglas over housing and a tradable good. As a result, even with heterogeneous agents, the utility-cost and time-cost formulations of commuting costs are equivalent.

But what if commuting costs are paid with money? In addition to more time on the road, driving a greater distance involves burning more fuel. (Actually, in these models, it typically involves burning more of the numeraire good.) This is not equivalent to the utility formulation, because the cost of a tank of gas is not a constant proportion of one’s income. Moreover, if the car itself costs money, then lower-wage workers might take the bus. The monetary costs of accessing different commuting technologies can have big consequences for urban form, as suggested by LeRoy and Sonstelie (1983), Glaeser, Kahn, and Rappaport’s “Why do the poor live in cities? The role of public transportation”, and Nick Tsivanidis’s paper on bus-rapid transit in Bogota. The iceberg formulation of commuting costs cannot tackle these issues.

Similarly, even though transportation infrastructure is surely more capital-intensive than much of the economy, we cannot speak to that issue when we parsimoniously model transport as simply coming out of people’s utility.

“Iceberg commuting cost” is a short, three-word phrase. I hope the 600+ words above suggest what it might mean.