The Science of Science Funding

Last week, I flew to Boston to attend a two-day conference on the Science of Science Funding, bringing together economists, policymakers, institutional and private funders, and Cindy and me. The conference was hosted by the National Bureau of Economic Research, and this particular meeting is now in its third year. Cindy attended the first two years, and this was the first time I got to attend.

I studied economics in college, and going to an NBER meeting has been on my bucket list for a while, so I was lowkey excited. It was a very jam-packed meeting: presentations of a paper or project, followed by presentations of reviewer comments, and then a short discussion with the audience of 50 people. There was a good mix of empirical data, theoretical modeling, and policy discussion.

The main purpose of the meeting is to create a space that brings together funders and economists. The sessions focused on measuring the impact of science funding, connecting research to policy outcomes, risk-taking in science funding, and addressing biases in selection decisions (e.g. gender).

Some of the funders represented included Sloan, Wellcome, Gates, NIH, the European Research Council, the Michael Smith Foundation, the Novo Nordisk Foundation, and Experiment. The head of the USPTO was also there, as well as economists from across Europe and South America.

I learned a lot on the short trip! I also got a chance to finally meet two researchers, Chiara Franzoni and Henry Sauermann, and discuss a potential experimental collaboration with them to study the risk behavior preferences of crowdfunders on Experiment.

The main takeaway from the meeting is that the way we fund science today could be much more scientific. Which is pretty obvious to anyone who's ever applied for science funding.

Here are some other takeaways.

Rules over discretion

In other industries, it's becoming increasingly common to see practices like blind code review, blind job interviews, and even blind symphony orchestra auditions as methods to reduce unintentional bias. However, if men and women communicate differently even in the way they write grant applications, then some of these practices might not be enough.

One study looked at word-choice patterns between men and women applicants over the last three years of the Gates Grand Challenges program. They found that men and women used different writing patterns in grant proposals: men were more likely to be overconfident and use broader words, while women were more likely to use narrow, topic-specific words.

Probably the most important paper of the meeting modeled the two main ways of allocating funding: fixed budgets versus proportional budgets. Fixed budgets are set from the "top down", with field-by-field budgets decided ex ante, like at NSF. Proportional budgets are based on need and the overall number of applications, "bottom up" so to speak, as the European Research Council and Canada do.

After a lot of applied math and supply-demand curve drawing, they demonstrated that there is some 'unraveling' from unstable equilibria, which leads to unintended consequences.

The intuition goes like this: imagine you are in a room with 100 of the top scientists in your field, and you all apply for a grant with only 10 spots. Ask yourself for a second whether you are in the group of the top 10 scientists in the room. For most people, the honest answer is "most likely not". Knowing you're probably not in the top 10 and unlikely to receive the grant, you're less likely to apply.

Now say a change is introduced: only the top 3 proposals automatically receive a grant, and the other 7 awards are determined by a roll of the dice among the rest of the applicants. Are you more likely to apply, knowing there is now at least a slight chance you might receive an award? Yes.

When you introduce more noise, you generate more grant applications.
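To make that intuition concrete, here's a toy self-selection simulation. It's my own sketch, not the model from the paper, and the application cost, award value, and belief noise are all made-up numbers. With pure top-10 selection, only people who believe they're near the top bother to apply; once 7 of the 10 slots become a lottery, applying has positive expected value for nearly everyone, and the application count jumps.

```python
import random

# Toy self-selection model (illustrative sketch only, not the paper's model).
# 100 scientists hold noisy beliefs about their own rank. Applying costs COST;
# a grant is worth VALUE. A scientist applies if perceived expected value > cost.

N, SLOTS, VALUE, COST = 100, 10, 1.0, 0.1
random.seed(0)

def applications(lottery_slots):
    applicants = 0
    merit_slots = SLOTS - lottery_slots
    for true_rank in range(1, N + 1):
        # Noisy self-assessment of rank (overconfidence would shift this down).
        believed_rank = max(1, true_rank + random.randint(-10, 10))
        # Perceived chance of winning on merit alone.
        p_merit = 1.0 if believed_rank <= merit_slots else 0.0
        # Rough perceived chance of a lottery slot (guessing ~half the field applies).
        p_lottery = 0.0 if p_merit else lottery_slots / (N / 2)
        if (p_merit + p_lottery) * VALUE > COST:
            applicants += 1
    return applicants

print("pure top-10 selection:   ", applications(lottery_slots=0), "applications")
print("top-3 plus 7 by lottery: ", applications(lottery_slots=7), "applications")
```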

There are a lot of implications from this. On the one hand, how likely are people to know their own type ("am I in the group of the top 10 applicants and likely to win?"), given that men are more likely to be overconfident in assessing their own type?

The other, bigger implication is that by adding little bits of noise to funding decisions through things like ambiguous 'reviewer discretion' (e.g. variation in evaluator quality on the review panel: some good reviewers, some terrible reviewers), you end up changing the applicant pool. If people know that reviewer quality will vary widely, it encourages them to apply.

The best way to avoid unintentional biases in funding decisions is to favor rules over discretion - rules about the application process, reviewer selection and training, selection standards, and applicant pools.

Economist Reinhilde Veugelers mentioned that in 2014 the ERC switched to a proportional budget system based on the idea of a payline: every applicant in every field should have an equal likelihood of getting funded, so that no one field is favored over another. Four years after the change, they noticed that the budget spend evened out. Social sciences and humanities funding grew from 17% to 23% of overall ERC spend, and the number of applicants grew substantially.
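For what it's worth, the payline mechanics are simple enough to sketch in a few lines. The field names and application counts below are hypothetical, not ERC figures; the point is just that awards track each field's share of applications, so every field faces the same success rate.

```python
# A minimal sketch of the payline idea, using made-up application counts.
# Under a proportional ("bottom-up") budget, each field's share of awards tracks
# its share of applications, so the success rate is identical across fields.

applications = {"physical sciences": 2400, "life sciences": 3600,
                "social sciences & humanities": 1200}   # hypothetical numbers
total_awards = 720

total_apps = sum(applications.values())
payline = total_awards / total_apps   # the same success rate for every field

for field, apps in applications.items():
    awards = round(apps * payline)
    print(f"{field}: {apps} applications -> {awards} awards ({payline:.0%} success rate)")
```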

In the end, as a funder you have to think carefully about panel construction so that you reduce diversity in reviewer quality; if a panel is too diverse (in certain dimensions), the quality of evaluation may go down. Also, consider reducing the cost of applying and thereby increasing the incentives to apply (e.g. make applications less time consuming, add budgetary supplements for grantees).

Everyone wants high-risk, high-reward

One thing I noticed was that everyone seemed to agree that high-risk, high-reward research is the desired outcome; the phrase came up often as the goal to design and optimize for.

Commonly cited literature suggests that industry is investing less and less in science (Arora 2018), while on the scientist side of the equation the consequences of failed research are becoming harsher: soft money, publish or perish, and the spread of tenure-track systems to countries that previously did not have them. This pushes everyone towards lower-risk, lower-return science, aka the dreaded incremental science. It seems no one wants to fund low-risk, incremental science.

This led to a discussion about how different agents in science view and treat risk differently. In theory, policymakers see risk in a speculative sense, trying to make positive growth investments in society. Program officers and funders see risk as speculative too, but also from a risk-minimizing point of view (e.g. diversification, minimizing bias through procedure). This has translated into recent real-world efforts by NIH to pursue high-risk, high-reward programs like the Pioneer Award and the Transformative R01.

"Impossible to know the counterfactual"

This phrase came up in a lot of presentations. One of the challenges of studying science funding is that it's difficult to treat it like an experimental science, simply because it's difficult to run actual experiments in science funding. Most of the studies presented were backwards looking, typically studying past award patterns and trying to find correlations with 'productivity' or 'value generated', e.g. how many grants turned into patents.

Essentially, the challenge is to ask something like, "would any of this research have happened without Wellcome/Gates/Experiment funding?" There were some clever data approaches to try and counteract this challenge, but unless you're actually running controlled funding experiments with scientists, it's hard to know for certain which policies work best.
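To make the counterfactual problem concrete, here's a small sketch of one common quasi-experimental workaround. This is my own illustration rather than a method from any particular paper at the meeting, and the scores and outcomes are invented: compare applicants scored just above and just below the funding cutoff, where who gets funded is close to arbitrary.

```python
# Sketch of a cutoff comparison near a funding payline (illustration only).
# Applicants just above and just below the score cutoff are similar, so their
# later outcomes approximate the missing counterfactual.

import statistics

# Hypothetical records: (review_score, funded, papers_in_next_5_years)
applicants = [
    (78, True, 9), (77, True, 7), (76, True, 8), (75, True, 6),
    (74, False, 5), (73, False, 6), (72, False, 4), (71, False, 5),
]

CUTOFF, WINDOW = 75, 4   # only compare applicants within +/- 4 points of the cutoff
near = [a for a in applicants if abs(a[0] - CUTOFF) <= WINDOW]
funded   = [papers for score, won, papers in near if won]
unfunded = [papers for score, won, papers in near if not won]

print("mean output, funded near cutoff:  ", statistics.mean(funded))
print("mean output, unfunded near cutoff:", statistics.mean(unfunded))
print("naive estimate of funding effect: ", statistics.mean(funded) - statistics.mean(unfunded))
```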

Self-citations and data clarity problems

One common complaint among the papers presented was that some level of data disambiguation can't be done using typical bibliometric approaches. For example, citations between grant proposals, papers, and patents are links, but they aren't qualified: they don't tell you whether the cited source actually had a major influence, or even whether the citation was positive or negative.

Another common complaint was accounting for self-citations. The projects that tried to study large volumes of citations ran into lots of false-positive and false-negative problems.
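As a toy illustration of why this is hard (not any paper's actual disambiguation pipeline; the names and normalization rule are made up), a naive name-matching filter for self-citations generates exactly the false positives and false negatives people complained about:

```python
# Minimal sketch of self-citation detection by author-name overlap.
# Matching on names alone produces both false positives (different people,
# same name) and false negatives (initials, name changes, transliterations).

def normalize(name: str) -> str:
    # Crude normalization: lowercase, strip periods, keep last name + first initial.
    parts = name.lower().replace(".", "").split()
    return f"{parts[-1]} {parts[0][0]}" if parts else ""

def is_self_citation(citing_authors, cited_authors) -> bool:
    citing = {normalize(a) for a in citing_authors}
    cited = {normalize(a) for a in cited_authors}
    return bool(citing & cited)

# 'J. Smith' and 'Jane Smith' match here -- possibly correct, possibly a false positive.
print(is_self_citation(["Jane Smith", "Wei Chen"], ["J. Smith", "R. Gupta"]))   # True
# 'Jane Smith-Lopez' after a name change is missed -- a false negative.
print(is_self_citation(["Jane Smith-Lopez"], ["Jane Smith"]))                   # False
```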

It feels like everyone is still at the stage of describing mechanisms rather than understanding the behaviors underneath and why they persist. For example, the fact that women are less likely than men to self-cite in grant proposals should discourage us from relying too heavily on policy drawn from citation-based research.

Tools-as-a-service

A few papers presented new services to help scientists and funders study the impact of their research by creating new public databases, such as the new Wellcome Trust tool that shows how research and funding is tied to new policy and legislative outcomes.

I noticed that the dominant mental model for new public tools is still to aggregate, centralize, and serve. It would be cool to see things like decentralized standards and interfaces instead of single tolled networks with a dependence on the hosting agency.

Meme of oldness

I have a theory that economists love prestige because it's difficult to game, and this was amusingly reaffirmed during the meeting. Nearly every presentation started with an intro slide showing a photo of some foundation's founder and the year it was started. It turned into a slight competition to see which funder or foundation was the oldest.

AI, ML, and OCR

One trend I noticed was a growing interest in using ML, OCR, and other new analytical tools to sort through all of the science funding records that are out there. The first paper presented created a new dataset linking patents to citations dating all the way back to 1926.

It seems like there's going to be a lot of analytical capacity and work that will be done mining data that was previously only human readable.

Reliance on bibliometrics vs. direct consumption of science

The most obvious pattern I noticed is that most people share the problem of not having access to enough types of data - data like how research budgets are spent, user behavior data, or qualitative data.

Because of this, it seems like most people are just reaching for the data that does currently exist or is easily attainable - and that's primarily unsophisticated bibliometric data they can scrape. Until we expand the kinds of data on offer, all of the studies are going to feel limited and backwards-looking.

Only one presentation mentioned the understudied value of individuals directly consuming science, like the utility from reading a book or watching a movie. Until we can devise other kinds of measurables, e.g. around "non-rational" outcomes of science funding like science communication and young-scientist training, funders are, IMO, going to be hamstrung. But this also might be a reflection of funders' priorities: seeing science as a speculative activity for growth and industry, rather than in softer measures for society at large.

Working papers model is unique

I really enjoyed the format of the conference, which is based on talks around working papers. Apparently, this is a common practice in economics, where researchers openly share their progress and have intense discussions about a paper before it's been peer-reviewed and formally published.

Part of this is because papers in economics take longer to publish than in other fields. It also seems like there's much less fear of being scooped in economics than in, say, the life sciences or other competitive fields.

It made for a really stimulating atmosphere - no one was angry or rude in critiquing the content, and everyone was very friendly, collegial, and nice 😙.


That's all for this year. Next year we hope to have some new experimental findings on risk behavior. We're hoping to collaborate with some economists to see if science crowdfunders choose projects to fund differently than scientific experts do, which I am looking forward to.

My raw notes from the meeting


Highlighted papers:

Resource Allocation across Fields: Proportionality, Demand Relativity, and Benchmarking. Marco Ottaviani [link]

Government-funded research increasingly fuels innovation. L. Fleming, H. Greene, G. Li, M. Marx, D. Yao [link]

Truly Legendary Freedom: Funding, Incentives, and the Productivity of Scientists. Matthias Wilhelm [link]

How Research Affects Policy: Experimental Evidence from 2,150 Brazilian Municipalities. Jonas Hjort, Diana B. Moreira, Gautam Rao, Juan Francisco Santini [link]
