ADEMU final conference – a detailed review

Ademu’s final conference took place at the European University Institute in Florence on the 9th and 10th May 2018. During the conference, the results of the three-year Horizon 2020 project were debated and analysed by Ademu coordinators and esteemed economists, academics and legal experts. Gergő Motyovszki and Jan Teresiński have provided a detailed review of the proceedings.

The objective of the Ademu research project was to reassess the overall fiscal and monetary framework of the EU and, in particular, the euro area. The Ademu team spent three years examining the legal and institutional framework of the European Economic and Monetary Union (EMU). A group of experts, brought together from leading European institutions, conducted a rigorous investigation of risks to the long-term sustainability of the EMU, and developed detailed institutional proposals aimed at mitigating these risks. The findings and main results of the project were discussed at Ademu’s final conference at the European University Institute in Florence on 9-10 May 2018, the proceedings of which follow. You can see the full programme here.

An eBook detailing the project’s findings has been produced in conjunction with voxEU.org: The EMU after the Euro Crisis: Lessons and Possibilities – Findings and proposals from the Horizon 2020 Ademu project

You can see a summary of the project and its findings in our end of project magazine: The Ademu Project: Findings and Proposals.

WELCOME

Ramon Marimon (scientific coordinator of the Ademu project, European University Institute, UPF-Barcelona GSE, CEPR) opened the conference by explaining the history of Ademu, which was a response to a call for research at the outset of the Horizon 2020 programme, and by confirming that it had delivered its goals. He also noted that Ademu has made extensive contributions to the economic and legal research literature, adding value for taxpayers.

Marianne Paasi (European Commission, DG Research & Innovation) welcomed the participants and thanked the organising and participating institutions, expressing her pride in Ademu. With the best macroeconomists on board, she noted, it not only held great promise to deliver something interesting but also contributed to training the next generation. The challenges facing Europe as identified by the EU and the European Commission were translated into research questions and addressed with scientific methodology within Ademu, yielding results that are very useful for policymakers. She also referred to the next multiannual budget of the European Union, where the results of Ademu are likely to help economics research secure a proper share of European Research Council grants.

SESSION 1:
In the aftermath of the euro crisis: lessons and dealing with the debt overhang

Presentation:

  • Giancarlo Corsetti | University of Cambridge and CEPR

Discussants:

  • Henrik Enderlein | Hertie School of Governance & Jacques Delors Institut
  • Christian Hellwig | Toulouse School of Economics
  • Juan Rojas | European Stability Mechanism

Giancarlo Corsetti: “In the aftermath of the euro crisis: lessons and dealing with the debt overhang”

Corsetti summarised the recent experience of the euro area, the underestimation of risk before the crisis and the polarisation of risk across member states. He acknowledged that a well-functioning currency union must allow for sovereign risk differentials, but argued that it cannot be stable unless credible institutions and policies anchor those differentials to economic fundamentals and rule out self-fulfilling confidence panics.

Risk polarisation that is detached from fundamentals interferes with the transmission of monetary policy, because interest rate cuts are not felt in the same way by households and firms in different member states. It also limits the scope for countercyclical fiscal policy, as larger deficits are likely to put a country in the riskier group; this resulted in an insufficiently accommodative macroeconomic policy stance in the euro area during the crisis and hindered the recovery. Risk polarisation also led to a highly asymmetric debt distribution, with the debt overhang likely to cause persistent differences across countries.

Corsetti outlined three tasks to address these issues: establishing a monetary backstop; assessing the need for an accommodative fiscal stance in the face of large shocks; and designing an official lending institution that can be relied upon to enhance debt sustainability.

With regard to the monetary backstop function, he noted the difference between euro area (EA) countries and countries such as the US or the UK, which have their own central banks. When a country joins a currency union, the monetary backstop for its government debt provided by the common central bank might not be credible.

In the presence of risk polarisation, this might give rise to self-fulfilling confidence crises and multiple equilibria as risk premiums become detached from fundamentals, adversely affecting the fiscal situation and validating the initial expectations – unless the central bank stands ready to eliminate those unfounded, belief-driven runs. Until 2012, he said, it was not clear whether the ECB would be willing to fulfil this role, but since the announcement of the OMT programme the situation has largely been resolved. He outlined the main messages of his theoretical model, which predicts that if the central bank is able to purchase government debt at the default-free rate, purchases may not be necessary: what matters is a credible promise to be willing to purchase. In order to rule out moral hazard, however, access to the monetary backstop needs to be conditional, and such conditions generally do not support accommodative fiscal policy.

In a liquidity trap, with monetary policy constrained by the zero lower bound (ZLB), support from fiscal expansion is essential for macroeconomic stabilisation. In order to de-link the fiscal response from risk polarisation, he proposed official lending by a euro area fund able to issue non-defaultable bonds, with member states’ access regulated by flexible fiscal criteria (allowing for counter-cyclicality while ensuring long-term sustainability).

On the role of the ESM in providing official lending, Corsetti outlined the results of a model, calibrated to match the Portuguese experience, which examined the trade-off between risk reduction and risk sharing. He showed that official lending regimes can raise the amount of debt that is sustainable without default, addressing the debt overhang and helping the recovery. Longer maturities seemed to matter more than below-market interest rates, but higher average debt levels also mean that defaults occur at lower debt levels in response to adverse developments in exogenous fundamentals (defaults become more likely). He concluded by emphasising the need to design time-consistent risk-sharing policies and institutions.

Discussion by: Henrik Enderlein

Enderlein took a rather critical approach. In his opinion, too much emphasis on the monetary backstop function is dangerous, as it puts the legitimacy of the ECB at stake: the central bank should not be overburdened. He asked whether it is the OMT programme or quantitative easing (QE) that provides the monetary backstop in the euro area. The two differ in terms of symmetry, their focus on rates versus spreads, conditionality, and whether they aim to stabilise the business cycle or to repair the monetary transmission mechanism. The answer also has implications for free-riding problems and debt sustainability. He criticised Corsetti for not mentioning QE and focusing on OMT.

His second critical point was that the no-bailout clause and debt restructuring are completely missing from Corsetti’s chapter in the VoxEU eBook produced to explain Ademu’s findings. It was not clear, he said, whether the clause is de facto ‘dead’, whether restructuring is automatically assumed to happen, or whether the issue is simply ignored. He suspected the third option.

He outlined two proposals taken from the 7+7 Franco-German report (of which he was a co-author). The no-bailout rule needs to be made credible (official lending cannot go to countries with unsustainable public finances unless they go through debt restructuring) and announced in combination with other risk-sharing policies. He mentioned a proposal for junior bonds, which a country must automatically use to finance spending beyond a certain limit. He also called for a credible safety net that does not rely on a direct monetary backstop. Within this framework, the concentration of sovereign bonds on banks’ balance sheets must be regulated, while a safe asset needs to be offered to the financial sector.

Discussion by: Christian Hellwig

Hellwig also took a critical approach, in his case to economic modelling and model-based policy advice. He outlined two contrasting world views about the euro crisis: the ‘Southern view’, which focuses on financial market imperfections and multiple equilibria and the resulting need for more risk sharing, and the ‘Northern view’, which regards these remedies as sources of moral hazard and a zero-sum transfer union. Microeconomic models can support both of these world views, so what does macroeconomics bring to the table? Policy proposals are based on specific models which might serve the modeller’s viewpoint. While scientific discussion takes it for granted that we agree on a common model to use (policy coordination), in real-world policymaking this is rarely the case. He mentioned the starkly different narratives of Wolfgang Schäuble and Yanis Varoufakis. In certain cases it might be very difficult to find common ground; there is, he said, far too little work in economics identifying policies which are compatible with multiple world views. These issues are even more pronounced in the case of fiscal and debt policies, as they are subject to democratic oversight, which entails having to find common ground among many voter groups.

Discussion by: Juan Rojas

Rojas noted that debt overhang is a serious issue and an important challenge, and remarked how little we still understand about how financial investors make their decisions. In particular, credit rating agencies might have serious effects by triggering runs, negative spirals and self-fulfilling dynamics, which could constitute another diabolic loop. The goal is to stop and block these loops early on.

Corsetti responded:

These issues are clearly relevant now in Europe and we need a model which is well-equipped to address them. He reminded participants that he was reporting findings and results to help our understanding, rather than giving direct policy advice.

SESSION 2:
A European Stability Fund for the EMU & Agreeing to an Unemployment Insurance System for the Euro Area?

Presentation:

  • Árpád Ábrahám | European University Institute
  • Lukas Mayr | European University Institute

Discussants:

  • László Andor | Former European Commissioner for Employment, Social Affairs and Inclusion, Senior Fellow at Hertie School of Governance
  • Aitor Erce | European Stability Mechanism
  • Coen Teulings | University of Cambridge

Árpád Ábrahám: “A European Stability Fund for the EMU”

Ábrahám presented his joint work with Eva Cárceles-Poveda, Yan Liu and Ramon Marimon on the European Stability Fund (ESF). He identified four related issues associated with strengthening the euro area: risk sharing and stabilisation policies in normal times, dealing with severe crises, resolving a debt crisis (the euro ‘debt overhang’) and developing ‘safe assets’.

The ESF – a constrained-efficient risk-sharing mechanism – should deal predominantly with risk sharing but should also help to solve all of these problems, since it treats the EU as a long-term self-enforcing partnership rather than a federal state. The Fund takes into account that it cannot affect policies at country level, yet it needs to provide incentives so that the partnership is self-enforcing. Just as the ECB managed to solve the time-inconsistency problem of monetary policy in the euro area (competitive devaluations), the ESF should be able to deal with time inconsistency in fiscal policy (procyclicality). The ESF contract is based on a risk assessment of the country and defines state-contingent transfers, as opposed to unconditional debt contracts. It induces countercyclical fiscal policy, provides risk sharing and builds on ex post conditionality, as opposed to the current ex ante eligibility conditions (‘austerity programmes’).

The ESF’s design takes into account several constraints:

  • the sovereignty (participation) constraint – the sovereign can always exit the partnership;
  • the redistribution constraint – risk-sharing transfers should not become ex post permanent, and the Fund needs to stay solvent at all times;
  • the moral hazard (incentive compatibility) constraint – the severity of the shocks may depend on which policies and reforms are implemented, which also affects the lending conditions;
  • the asymmetry constraint – there is no ex ante veil of ignorance: countries enter the Fund in asymmetric positions and may start with large debt liabilities;
  • the funding constraint – the Fund should be (mostly) self-funded by raising its own assets.

The assessment of the ESF is based on a comparison of two regimes: incomplete markets with costly default (IMD), in which debt is not state-contingent and in which countries can reduce the probability of adverse shocks by exerting costly effort (this corresponds to the current situation), and a regime with the ESF operating. The model is calibrated to five stressed euro area countries – Greece, Ireland, Italy, Portugal and Spain. The comparison suggests that the ESF results in more consumption smoothing, countercyclical fiscal policies and very low (or even negative) government bond spreads. The Fund provides a high capacity to absorb severe shocks and existing debt. The transfers are conditional not just ex ante; conditionality is a persistent feature, as credit terms exhibit history dependence. If a country is in a good state, it is asked to repay its debt faster.

Incentive compatibility is ensured by the fact that the terms of the contract get worse if the agents exert less effort. Compared to IMD, the Fund allows for a higher capacity to raise debt and much more insurance (due to more consumption smoothing), which results in huge welfare gains.

Lukas Mayr: “Agreeing to an Unemployment Insurance System for the Euro Area?”

Mayr presented his joint work with Árpád Ábrahám, Joao Brogueira de Sousa and Ramon Marimon on the Ademu proposal for a European Unemployment Insurance System (EUIS). He indicated that the idea is not new and dates back to the Marjolin Report (1975).

Since then, several policy proposals have been put forward. It is costly for countries to deliver unemployment insurance during recessions, while business cycles in the euro area are not perfectly correlated, which provides scope for risk sharing. However, such a common European system might result in cross-country transfers due to large differences in unemployment rates and labour market flows across euro area countries. European labour markets are heterogeneous in terms of job finding and separation rates, unemployment duration and the replacement rates of benefits.

Whether the EUIS is beneficial for the countries involved, and whether unanimous agreement on replacing current systems is achievable, depends on the system design. The model in which different policy options are evaluated (in terms of risk-sharing benefits) features heterogeneous agents and endogenous labour supply. The EUIS provides support when a large negative shock hits and increases unemployment (a member state enters an unemployment recession alone). In that case the system yields welfare gains which are small but positive for all member states. Benefits go mainly to the employed, since there is less need to increase taxes to finance rising unemployment benefits (taxes are smoother).

The optimal unemployment insurance system is remarkably similar across countries, with a replacement rate of 20-45% and a very long duration of eligibility. However, national governments do not internalise the general equilibrium effects of their reforms on citizens in other countries. When general equilibrium effects are taken into account, such nationally optimal reforms are welfare worsening: too generous policies lead to lower savings and a lower capital stock, which depresses wages. Hence, the system needs to be centralised. In that case a fully harmonised EUIS features a replacement rate of 15% with unlimited duration, financed with a common wage tax. This has different results in different countries: besides net recipients, who always gain, there are net contributors, some of which gain and some of which experience losses. With country-specific taxes (so that cross-country transfers are neutralised), however, every member state can gain. In addition, rising EUIS contribution taxes give member states an incentive to reform their labour markets.

Discussion by: László Andor

Andor agreed that this idea is not new, dating back to the 1970s when the European Community was looking for a fiscal stabilisation capacity. After the Maastricht Treaty the discussion went quiet, and it was recently brought back by the crisis. Many proposals have been discussed lately, especially during the Italian and Slovakian presidencies of the Council of the European Union. He indicated that there are different options for automatic stabilisers: income support based on the output gap (lacking social focus), reinsurance of national unemployment insurance funds (transfers triggered by major crises) and partial pooling of unemployment insurance systems. Both reinsurance and partial pooling deliver economic, social and institutional stabilisation but require acceptance of limited transfers and harmonisation of labour markets. Key issues in the EUIS debate are whether fiscal capacity is the next step in EMU reform, whether transfers will be accepted and how to account for moral hazard, what degree of harmonisation is needed, and how to finance the system (via a European payroll tax, GDP-based contributions or even a levy on current account surpluses). What also matters is the borrowing capacity and the role of social partners in the system.

Discussion by: Aitor Erce

Erce pointed out that the EMU needs both more risk sharing and a robust crisis-resolution mechanism. The ESF can deliver both in one stroke, while the ESM deals only with large shocks. The ESF also addresses moral hazard by delivering incentive-compatible financing and allows risky debt to be transformed into safe assets. However, what seems missing in the current version of the ESF proposal is a discussion of how the contingent contracts differ from the system currently in place in terms of conditionality and renegotiable official loans. How the optimal level of effort is determined also needs to be better explained. Other open issues are whether the ESF reduces the need for a sovereign debt restructuring mechanism, and the need for trust between long-term partners. Commenting on the EUIS, Erce indicated that replacement rates are very low in the proposed system, and that the duration and severity of unemployment also depend on the productive structure (the construction sector), which raises the question of how including housing in the model would affect the results. More discussion of implementation should also be included in the paper.

Discussion by: Coen Teulings

Teulings referred to several problems Europe is claimed to be facing: lack of investment in R&D, unequal access to education, too high public deficits, and too high deficits in the south. Deficits in the eurozone, he said, are not larger than in other industrialised countries. Still, the eurozone is facing a debt overhang that can be solved by the ESF, which massively outperforms incomplete markets with default. The question is why this is so – is it because it helps contain moral hazard? He indicated that the paper contains an excess of maths without much intuition, especially in explaining what negative spreads in good times mean. It should also be explained what debt is in this context – total debt or external debt – and whether debt is only for smoothing or also structural. Another issue is whether the system is a general mechanism for any club of countries or specific to a monetary union; in the latter case, where do the monetary union considerations come in? It is also not clear why effort does not depend on the debt level. The paper should also discuss how far the current ESM is from the proposed ESF.

Árpád Ábrahám’s response:

Ábrahám pointed out that effort in the model is a black box and includes many dimensions of policy. Effort depends on debt in equilibrium, and by increasing effort (government reforms) a country may reduce its misfortune, lowering the probability of negative shocks and improving its financing conditions. The model was calibrated to the world before the ESM; the post-ESM data series are too short to be used for calibration. Ex post conditionality of the contracts means that they depend on history and are adjusted period by period depending on changes in shock conditions.

Lukas Mayr’s response:

Mayr indicated that although the proposed EUIS has a low replacement rate, it also features a long duration. In order to increase the incentives to look for a job one needs to reduce the replacement rate – there is a trade-off between incentives and insurance. There is no need to harmonise labour market institutions in order to achieve gains, but such structural reforms bring benefits to countries in the form of lower contributions.

SESSION 3:
Reassessing tax policies and tax coordination: the case of a tax on automation

Presentation:

  • Pedro Teles | Catolica Lisbon School of Business and Economics, Banco de Portugal, CEPR

Discussants:

  • Charles Brendon | University of Cambridge
  • Jordi Caballé | UAB, Barcelona GSE
  • Rody Manuelli | Washington University in St. Louis

Pedro Teles: “Reassessing tax policies and tax coordination: The case of a tax on automation”

Teles described his paper ‘Ramsey Taxation in the Global Economy’, where he and his coauthors explore optimal taxation in an open economy context. The main conclusions reveal that no restrictions should be placed on free trade, nor on capital mobility, while capital should be taxed at a zero rate.

A minimal set of instruments to implement the Ramsey allocation consists of consumption and labour income taxes. He referred to the seminal result of Diamond and Mirrlees (1971): if all net trades can be taxed at different rates, then there should be no taxes on intermediate goods, including on trade, on cross-country capital allocation or on capital accumulation. Taxing all net trades at different rates in effect means taxing different types of labour at different rates; this, however, is inconsistent with tax harmonisation.

They also explore alternative specifications in which assets are taxed. They compare setups where internationally traded goods are taxed according to destination versus origin, and look at value-added taxes (VAT) with and without border adjustments. In the former case, exports should be taxed (at a uniform rate) and imports should not; in the latter, vice versa. He noted that, according to Lerner symmetry, taxing imports or exports should be equivalent and border adjustments should not matter, but Lerner symmetry breaks down in a dynamic context and with multiple goods, as the intertemporal price of tradeable goods can be distorted by a time-varying tax.
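
For readers unfamiliar with the static benchmark he referred to, a textbook sketch of Lerner symmetry can be written down under simplified assumptions (one period, balanced trade, tax revenue rebated lump sum); this illustration is not taken from the paper itself.

```latex
% Static Lerner symmetry (textbook sketch, not from the paper).
% With world prices $p_m^*$ (importable) and $p_x^*$ (exportable), an import
% tariff $\tau_m$ and an export tax $\tau_x$ distort the domestic relative
% price in exactly the same way:
\[
\underbrace{\frac{p_m^*\,(1+\tau_m)}{p_x^*}}_{\text{import tariff}}
\;=\;
\underbrace{\frac{p_m^*}{\,p_x^*/(1+\tau_x)\,}}_{\text{export tax}}
\qquad \text{whenever } \tau_m=\tau_x .
\]
% Since only this relative-price wedge matters (and the revenue is rebated
% lump sum), the two instruments support the same allocation. With many
% periods and a time-varying tax, the wedge also hits the intertemporal price
% of tradeables, which is why the equivalence breaks down in the dynamic
% setting Teles described.
```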

Teles also discussed the winners and losers of the globalisation process, and the need to assess how to compensate the losers efficiently through the tax system. In particular, he focused on the potential adverse effects of automation through losing routine jobs to robots, which are the intermediate goods used to help automation. It is crucial, he said, to ensure that the benefits of innovation are not confined to a small portion of the population (those who are born with non-routine skills), but are also shared with the losers (routine workers substituted out by robots).

However, the first question arising in connection with a “robot tax” is how it can be reconciled with the previous results on the optimality of free trade and no taxation of capital or intermediate goods. Recalling again the Diamond-Mirrlees result, if different types of labour (routine versus non-routine) could be taxed at different rates, then there would be no need to tax robots and thereby distort production.

In the real world, however, this would constitute tax discrimination, which is why taxing robots might still need to be part of the fiscal policy mix, even if it distorts production. A tax on robots is in effect a tax on non-routine labour and a subsidy for the routine workers whom robots substitute.

Examining the same question within the Atkinson-Stiglitz framework, a robot tax might be justified on the grounds that it relaxes information constraints: by increasing the pre-tax wages of routine workers, it ensures that it is not optimal for non-routine types to imitate routine workers, who do not need to work as much or as hard.

This potential robot tax would range between 10% and 40%, depending on how much the real world tax system is restricted. In the current tax system, however, even with the higher robot tax and more progressive labour taxes, routine workers could still be made relatively very poor by automation, so other remedies might be needed.

Discussion by: Charles Brendon

According to Brendon, the main take-away was that the ease of international tax coordination and optimal policy design depends crucially on the fundamental frictions governments face. A planner typically maximises an objective function subject to the resource constraint of the economy and some distortions in goods and labour markets or in the information structure of the economy. Without the latter, the problem is quite trivial: based on the Second Fundamental Theorem of Welfare Economics, lump-sum taxes would suffice to replicate any allocation.

The key question, he said, is what kind of distortions we take into account in our tax design. In the first paper, the main distortions are constraints on consumer wealth, which put a limit on lump-sum taxation, so we have to resort to labour and consumption taxes (based on inverse elasticity rules). If this is the key constraint, there is no need to distort the production process, and the paper by Teles et al. teases out the implications for the global economy in different setups. However, the paper assumes global coordination on tax policy, which might be hard to sustain.

Regarding the second paper, Brendon noted that the main friction is the non-observable heterogeneity of workers (routine versus non-routine), which introduces information constraints. The need to resort to distortive robot taxes leads to a redistribution-efficiency trade-off, but a quite complex one: taxing robots not only reduces pre-tax wage inequality at the cost of breaking production efficiency, but also eases income tax distortions, since there is less need to use progressive labour income taxes. He also suggested addressing the issue of global coordination on robot taxes.

Discussion by: Jordi Caballé – “Disclosure of Corporate Tax Reports, Tax Enforcement, and Insider Trading”

Caballé introduced one of his own papers related to the topic of taxation which studies whether it is desirable to make a firm’s tax statements public in an environment with insider trading, a recurring policy issue in the United States and Europe. While there is some empirical evidence on the topic, theoretical contributions are rare, a gap the authors aim to fill.

Firm managers have an incentive to misreport their taxes, while the tax authority has incentives to make tax returns public. He briefly presented the model setup and concluded with the main takeaways. If tax returns are disclosed, then a strategic manager will never report high profits truthfully. If tax reports are confidential, then the manager reports truthfully with some positive probability (provided that auditing costs are not too high). With high auditing costs, the manager always submits a false report, and reports are then completely uninformative. The key trade-off is that while disclosure of tax reports improves financial market performance (by constraining insider trading), it might affect tax revenue negatively. Thus, the issue should be regulated by law rather than by discretionary decisions.

Discussion by: Rody Manuelli

Manuelli mainly discussed the first paper presented by Teles and praised it for exploring the main issues regarding optimal taxation. In his opinion, looking at the cooperative solution is a good (if unrealistic) starting point. He introduced Ramsey taxation as a problem where, in the right setup (N goods and N taxes), lump-sum taxation is always best; with the correct N-1 taxes we might still do well, but with N-2 it becomes a mess in which anything can happen. In a very basic setup it is possible to eliminate intertemporal capital taxes by setting consumption taxes right, but this intuition might not be robust to including human capital in the model. In that case all taxes should be zero in the steady state, and a social planner would have to raise revenue during the transition, out of the income from accumulated assets. With non-standard preferences and more restrictions faced by the Ramsey planner, introducing intertemporal distortions might be optimal. On the second paper, he asked what exactly a robot is. If it is just a form of capital, then according to the results of the first paper it should not be taxed. He concluded by saying that the paper delivered the consistent finding that Ramsey tax systems can only maintain production efficiency if there are no restrictions on taxing different trades differently. He asked whether tax harmonisation is feasible in the real world, and called for further work on non-cooperative solutions.

SESSION 4:
Macroeconomic Stabilization, Fiscal Consolidation and Recessions with Heterogeneous Agents

Presentation:

  • Morten Ravn | University College London 
  • Evi Pappa | European University Institute

Discussants:

  • Axelle Ferriere | European University Institute
  • Kurt Mitman | Stockholm University

Morten Ravn: “Macro Stabilization: a HANK+SAM perspective”

Ravn noted how the standard New Keynesian DSGE models with representative agents dominated the pre-crisis macroeconomic literature and policy work. By combining empirically relevant goods and labour market frictions with the theoretical rigour of micro-founded macro models, these models became very influential. However, the crisis revealed some flaws in the framework. Citing Janet Yellen, he mentioned several challenges to our understanding of the economy, such as the effect of heterogeneity on aggregate demand, the potentially persistent or permanent effects of aggregate demand on supply capacity, and the puzzling case of missing deflation at the zero lower bound. According to him, a new modelling framework can address these concerns about the Representative Agent New Keynesian (RANK) model.

This is the HANK+SAM framework, which combines a Heterogeneous Agent (HA) setting describing the demand side, with the standard New Keynesian (NK) type nominal rigidities in the supply side of the economy. Idiosyncratic risk and incomplete markets will give rise to precautionary savings, and the emergence of some borrowing-constrained, ‘hand-to-mouth’ households, which does not happen in the complete markets representative household setup. Search-and-matching (SAM) frictions in the labour market endogenise the otherwise ad-hoc idiosyncratic risk process: the business cycle will endogenously interact with precautionary savings through affecting unemployment risk. Thus an endogenous risk wedge emerges due to a lack of social insurance against unemployment and wage risk (incomplete markets), which is a key driving mechanism in the model.

He distinguished procyclical and countercyclical parts of the risk channel. As wages are procyclical, wage risk tends to be stabilising (in recessions the risk is lower since there is less to lose), but unemployment risk is obviously countercyclical, which is destabilising. According to Ravn, the countercyclical part of the endogenous risk wedge dominates, which leads to amplification of shocks (destabilisation) in the following way: as demand contracts, unemployment risk goes up, which, in the absence of full insurance, prompts households to try to self-insure by increasing precautionary savings. However, this further depresses demand, worsening unemployment risk, too.

Due to the amplification, monetary policy might need to react more aggressively in order to stabilise the economy, so the standard Taylor principle becomes insufficient: the precautionary motive to save more (which is absent from complete-markets RANK) needs to be offset by an intertemporal motivation (through interest rate cuts) to “dissave”. A so-called ‘unemployment trap’ equilibrium emerges, combining low growth with high unemployment in the long run, which can be triggered by large negative shocks in the presence of both countercyclical risk and nominal rigidities.

The liquidity trap equilibrium features low but positive inflation, potentially explaining the missing deflation at the ZLB. Supply side improvements (modelled by positive TFP shocks) can be inflationary; by improving employment prospects, they reduce the precautionary saving motive (in RANK models these would be deflationary). Therefore, structural reforms might in fact help in escaping the liquidity trap. Another insight from HANK models is that with a full distribution of heterogeneous households (including credit constrained ones), Ricardian equivalence of fiscal policy fails, so it is very important how the government budget reacts to shocks in the economy and how it is distributed across households. Incomplete markets also imply that the social insurance against idiosyncratic risk provided by fiscal redistribution can play a large role in stabilising demand fluctuations.

Evi Pappa: “Fiscal Consolidation in a Low Inflation Environment: Pay Cuts versus Lost Jobs”

Pappa reviewed some of the literature on fiscal multipliers. Recently, much work has gone into deriving fiscal multipliers during liquidity trap episodes, where the effectiveness of fiscal policy might be greatly enhanced. With interest rates stuck at the ZLB, an increase in government spending might have much smaller crowding-out effects on investment and consumption, thereby leading to a much larger expansion of output. Conversely, fiscal consolidation in a liquidity trap can be especially harmful.

Pappa presented the results of a joint paper with her coauthors, which aims to contribute to understanding the effects of fiscal consolidations at the ZLB.

They examine how different types of fiscal consolidation affect output in an open economy context, motivated by the example of the euro area periphery, where most of the fiscal austerity after the euro crisis took the form of a reduction in the public wage bill (as opposed to cuts to government spending in the goods market, or tax hikes, which are the traditionally modelled fiscal instruments). To this end, they develop a two-country monetary union New Keynesian model with search-and-matching frictions in the labour market, in order to model unemployment, and compare public wage cuts with cuts in public hiring (vacancies). They find that, away from the ZLB, a fiscal consolidation creates a positive wealth effect, crowding consumption and investment in. The cut in the public wage bill (using either of the two instruments) leads to a reallocation of labour from the public towards the private sector, placing downward pressure on private wages and leading to an internal devaluation of the real exchange rate. The ensuing boost in competitiveness stimulates external demand, which, together with the crowding-in effect, actually leads to higher private demand and an output expansion, facilitating hiring and lowering unemployment. This is in contrast to traditional fiscal instruments. Wage cuts have better unemployment outcomes than hiring cuts.

In a liquidity trap scenario, private demand is constrained and unable to absorb the workers released from the public sector, so private employment falls. At the ZLB, fiscal consolidation cannot be expansionary. Public vacancy cuts result in persistently higher unemployment; public wage cuts, however, can overturn this effect in the medium run by reducing labour costs and enhancing competitiveness. This is in stark contrast to the more harmful effects predicted for traditional fiscal consolidations in liquidity traps. Sensitivity analyses imply that this type of consolidation can be more successful if prices are relatively less sticky in the periphery than in the core countries, if labour mobility between sectors is relatively high, or if the public good is not productive.

Pappa concluded by outlining possible extensions of this model with heterogeneous agents (where fiscal policy can have markedly different effects due to “hand-to-mouth” agents, and lack of insurance against idiosyncratic risk); with unconventional monetary policy tools (such as forward guidance, or interactions with fiscal policy); or with fiscal coordination within the monetary union.

Discussion by: Axelle Ferriere – Heterogeneous Effects of Fiscal Policy

Ferriere presented her own Ademu research on this topic, which looks at how fiscal spending multipliers depend on the tax distribution and finds that government spending is more expansionary if it is financed by more progressive taxes. After showing estimates obtained with a local projection method, she outlined a Heterogeneous Agent New Keynesian (HANK) model with indivisible labour which confirms the result. The crowding-out effect of distortionary taxes fades when they only affect richer households, who typically have a lower labour supply elasticity. This channel comes in addition to the other main channel, present in most HANK models, which works through heterogeneity in marginal propensities to consume (MPC): tax increases concentrated on low-MPC rich households also tend to hurt demand less, as these households are likely to smooth out temporary changes in disposable income (unlike poorer, more borrowing-constrained households).
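
For readers unfamiliar with the estimation approach mentioned above, the sketch below shows what a horizon-by-horizon local projection of output on a spending shock, interacted with a measure of tax progressivity, could look like. All variable names (output, g_shock, progressivity) and the exact specification are illustrative assumptions, not Ferriere's actual code or dataset.

```python
# A minimal, self-contained sketch of horizon-by-horizon local projections,
# in the spirit of the estimation approach described above. Variable names
# and controls are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def local_projections(df: pd.DataFrame, horizons: int = 8) -> pd.DataFrame:
    """Estimate the output response to a spending shock at each horizon,
    letting the response vary with how progressively the spending is financed."""
    rows = []
    for h in range(horizons + 1):
        d = df.copy()
        # Dependent variable: cumulative log-output change from t-1 to t+h.
        d["dy_h"] = np.log(d["output"]).shift(-h) - np.log(d["output"]).shift(1)
        d["lag_logy"] = np.log(d["output"]).shift(1)  # a simple lagged control
        d = d.dropna()
        # Spending shock plus its interaction with a tax-progressivity index.
        res = smf.ols("dy_h ~ g_shock + g_shock:progressivity + lag_logy",
                      data=d).fit(cov_type="HAC", cov_kwds={"maxlags": h + 1})
        rows.append({"horizon": h,
                     "multiplier": res.params["g_shock"],
                     "extra_if_progressive": res.params["g_shock:progressivity"]})
    return pd.DataFrame(rows)
```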

Discussion by: Kurt Mitman

Mitman suggested that Bewley-Huggett-Aiyagari style heterogeneous agent incomplete markets models were the workhorse model in public economics due to their ability to generate realistic distributions of wealth, earnings and MPCs. Somewhat disconnected from this, the workhorse in monetary economics was the New Keynesian model, which due to nominal rigidities made output demand-determined, gave a meaningful role to monetary policy and was able to match aggregate data. More recently, the research frontier has been about combining these two model families in what are known as HANK models, which he dubbed ‘AiyaGalí’, after the main contributors to the two preceding frameworks. Among the advantages of this approach, Mitman listed the fact that such models overcome price level indeterminacy in a liquidity trap, produce a well-defined fiscal multiplier at the ZLB, and allow for arbitrary combinations of monetary and fiscal policies.

He outlined the key transmission channels of fiscal policy in HANK models: private consumption is affected by the traditional intertemporal substitution channel, through altering the real interest rate and the savings decision of Ricardian households. While in RANK models this is the main (direct) channel, in HANK models transmission relies more heavily on indirect general equilibrium effects which depend on the entire distribution of households. The path of wages, transfers and hours, and their potential redistribution among agents, can have substantial aggregate effects through the presence of non-Ricardian, borrowing-constrained agents who might not be able to smooth consumption as prescribed by intertemporal substitution motives. He pointed out that the crowding-out effect on investment is also affected by this distribution.

In such a framework Ricardian equivalence breaks down, so it matters how fiscal spending is financed. He showed impulse responses from such a HANK model, where a tax-financed increase in spending crowds out consumption, while a deficit-financed one leads to much higher fiscal multipliers, due to the rise in the disposable income of high-MPC, hand-to-mouth households (higher wages due to the demand increase are not immediately taxed away). He presented results from his own work showing how the equilibrium is uniquely determined by fiscal policy even in the presence of the ZLB – somewhat in contrast to the multiple equilibria presented by Ravn at the beginning of the session.

Morten Ravn’s response:

Some people from the New Keynesian camp insist that implications similar to those of HANK models can be arrived at in a much simpler framework as well (e.g. one omitting idiosyncratic risk and a full distribution of households). He suggested this is no longer the case when SAM frictions are added to the model, because of the importance of the endogenous risk channel there.

Ramon Marimon asked Axelle Ferriere about the multipliers in her presentation, whether they are even larger if only the progressivity is increased, but the level of taxes (needed to finance higher spending) is not. Ferriere agreed that the expansion is larger in this case.

SESSION 5: Presentations of PhD and Postdoc Researchers’ contributions to the Ademu project

Presentation:

  • Anna Rogantini Picco | European University Institute 
  • Carlo Galli | University College London 
  • Carolina López-Quiles Centeno | European University Institute
  • Joao B. Duarte | University of Cambridge
  • Johannes Fleck | European University Institute
  • Joachim Jungherr | Institut d’Anàlisi Econòmica (CSIC), MOVE, and Barcelona GSE

Discussants:

  • Mathias Dolls | Ifo Institut
  • Gaetano Gaballo | Banque de France and Paris School of Economics
  • Mike Mariathasan | KU Leuven
  • Thomas Hintermaier | University of Bonn
  • Juan Dolado | European University Institute
  • Tryphon Kollintzas | Athens University of Economics and Business

Anna Rogantini Picco: “International Risk Sharing in the European Monetary Union”

Picco began by stating the research question of her joint paper with Alessandro Ferrari: has the adoption of the euro had an impact on risk sharing across the euro area? The paper assesses how a common currency alters shock absorption capacity – how euro adoption affected the ability of euro area member states to share risk. This is done by creating a counterfactual dataset of macro variables for the euro area countries under the scenario of no adoption of the common currency, using the synthetic control method of Abadie & Gardeazabal (2003). The absorption capacity of different channels is estimated pre- and post-euro with actual and synthetic data. Risk sharing channels are identified using the method of Asdrubali et al. (1996) and include the international capital markets, international transfers, public savings and private savings channels. The effects of euro adoption are identified using difference-in-differences estimation. The analysis covers 22 countries – 11 euro area member states and 11 control group countries. The authors show that euro adoption has decreased the shock absorption capacity of the member states. However, the effects are heterogeneous: there was no significant change in absorption capacity in the core countries, while there was a significant decrease in the periphery, which drives the overall result. The decrease in risk sharing capacity occurred mainly via a sharp reduction in the private risk sharing channel.
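
As an illustration of the two empirical steps described above, the following sketch constructs synthetic control weights for one treated country and then compares pre- and post-adoption changes against the synthetic counterfactual. Array shapes, names and the plain difference-in-means comparison are simplifying assumptions for exposition, not the authors' implementation.

```python
# Sketch of (i) synthetic control weights matching a euro country's pre-1999
# outcomes with a weighted combination of non-euro "donor" countries, and
# (ii) a difference-in-differences comparison before and after adoption.
import numpy as np
from scipy.optimize import minimize

def synthetic_weights(treated_pre: np.ndarray, donors_pre: np.ndarray) -> np.ndarray:
    """treated_pre: (T_pre,) outcomes of the euro country before adoption.
    donors_pre: (T_pre, J) outcomes of J control countries over the same years.
    Returns non-negative weights summing to one (Abadie & Gardeazabal style)."""
    J = donors_pre.shape[1]
    loss = lambda w: np.sum((treated_pre - donors_pre @ w) ** 2)
    cons = ({"type": "eq", "fun": lambda w: np.sum(w) - 1.0},)
    bounds = [(0.0, 1.0)] * J
    res = minimize(loss, np.full(J, 1.0 / J), bounds=bounds,
                   constraints=cons, method="SLSQP")
    return res.x

def diff_in_diff(actual: np.ndarray, synthetic: np.ndarray, post: np.ndarray) -> float:
    """Change in the actual outcome after adoption minus the change in the
    synthetic (no-euro) counterfactual; post is a boolean indicator array."""
    pre = ~post
    return ((actual[post].mean() - actual[pre].mean())
            - (synthetic[post].mean() - synthetic[pre].mean()))
```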

Discussion by: Mathias Dolls

Dolls remarked that the paper is well written and contributes to the ongoing debate about the institutional structure of the EMU and fiscal risk sharing. Commenting on the results, he pointed out that the reduction in consumption smoothing happened only in the periphery countries, suggesting the authors should convince the reader that the control group they use is a valid counterfactual over longer periods. It is possible that there are structural breaks in the dataset which affect the counterfactual. The paper should also be put into more perspective against the previous literature. One of the explanatory factors the authors put forward for their results is the higher (pre-crisis) GDP growth and volatility generated by euro adoption. However, it is not clear why this should be an explanatory factor; in fact, the unsmoothed fraction of GDP was even larger in the pre-crisis period. The authors should provide other reasons behind their findings.

Anna Rogantini Picco’s response:

Picco said that the weights used in the synthetic control method do not change over time, so structural breaks should not matter. Excluding the crisis period as a robustness check indicates that there was even less smoothing in the pre-crisis period after euro adoption. Higher GDP volatility is a potential driver of the results but certainly not the only one.

Floor discussion:

Árpád Ábrahám (EUI) said he was surprised that euro adoption weakened the private savings risk sharing channel. This is puzzling, since it is difficult to think of barriers that the euro introduces to using private savings for risk sharing.

Carlo Galli: “Is Inflation Default? The Role of Information in Debt Crises”

In his joint work with Marc Bassetto, Galli shows that countries that borrow in their own currency are more resilient to debt crises because they are able to monetise their debt, so that government bond prices react less to bad news. The ability to print money locally helps to avoid default risk, but comes at the cost of higher inflation and higher inflation expectations. Inflation, however, is sluggish, while sovereign spreads move much more quickly. Debt crises require coordination: among bond holders anticipating spikes in spreads in the case of foreign currency debt, and among price setters anticipating an escalation of domestic inflation expectations in the case of domestic currency debt. The authors perform comparative statics and compare two economies with debt denominated in different currencies. The price of debt is based on government fundamentals, but the model economy features noise traders so that prices are not fully predictable. Agents have prior beliefs about the government’s ability to repay. Price responsiveness to government fundamentals is larger if debt is denominated in foreign currency; in the case of domestic currency the price responsiveness function is flatter. Domestic currency debt thus plays an insurance role in the economy.

Discussion by: Gaetano Gaballo

Gaballo agreed that debt in domestic currency is more resilient to news shocks than debt denominated in foreign currency. Price setters (inflation pricers) are less informed agents than those who price bonds (default pricers). In the model, primary market traders need to guess what secondary market agents know. Such ignorance promotes the liquidity value of bonds but comes at a cost. The model is based on the extensive margin and focuses on how many agents enter the market; he suggested it should be based on the intensive margin instead. Such an alternative model is simpler and delivers the same results: the lower the precision of information of the marginal agent at the final stage, the lower the reaction of prices to news, and depending on who the marginal agent at the final stage is, the price of debt is more resilient to news when the debt is in domestic currency. The authors, he said, should justify why they use the extensive margin rather than the intensive margin, since none of the results relies on the extensive margin approach.

Carolina López-Quiles Centeno: “Deposit Insurance and Bank Risk-Taking”

López-Quiles Centeno presented her joint work with Matic Petricek, pointing out the trade-off related to deposit insurance: on the one hand it eliminates the risk of bank runs, but on the other hand it removes depositors’ incentives to monitor banks, which results in a moral hazard problem. The authors ask how deposit insurance affects bank risk-taking. The contribution of the paper is cleaner identification, thanks to studying a change in regulation instead of comparing different financial systems, and to measuring risk-taking through loan application data instead of balance sheet data, as in the previous literature. The paper focuses on the deposit insurance system in the US, where the statutory limit of deposit insurance coverage was increased in 2008. Banks in Massachusetts, however, were immune to this change, as they already had unlimited coverage, and can therefore be used as a control group. The authors use quarterly balance sheet data and mortgage application data. The propensity to originate a loan given its risk characteristics (the measure of risk taking) is estimated with a linear probability model regressing origination on the loan-to-income ratio, for every bank and every year. A difference-in-differences model is then used to estimate the effect of the treatment (the increased deposit insurance coverage). Overall, the increase in deposit insurance does not increase risk taking (at least on the intensive margin). The result also holds if one controls for the crisis period. Hence, there is no policy trade-off: increased coverage eliminates bank runs and does not increase risk taking.
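
A rough sketch of the two-stage strategy just described is given below: a bank-year linear probability model of loan origination on the loan-to-income ratio, followed by a difference-in-differences regression of the resulting risk-taking measure on the 2008 coverage increase. Column names (originated, loan_to_income, treated, post) are hypothetical placeholders, and the code assumes a clean panel with 0/1 indicator variables; it is not the authors' implementation.

```python
# Stage 1: per bank-year, a linear probability model of loan origination on
# the applicant's loan-to-income ratio; the slope is read as risk taking.
# Stage 2: difference-in-differences on the coverage increase, with
# Massachusetts banks (unlimited coverage) serving as the untreated group.
import pandas as pd
import statsmodels.formula.api as smf

def risk_taking_measures(applications: pd.DataFrame) -> pd.DataFrame:
    """One slope per (bank, year): sensitivity of origination to loan risk."""
    rows = []
    for (bank, year), g in applications.groupby(["bank_id", "year"]):
        res = smf.ols("originated ~ loan_to_income", data=g).fit()
        rows.append({"bank_id": bank, "year": year,
                     "risk_taking": res.params["loan_to_income"]})
    return pd.DataFrame(rows)

def did_estimate(panel: pd.DataFrame) -> float:
    """panel columns: risk_taking, treated (=1 outside Massachusetts),
    post (=1 after the 2008 coverage increase); clustering omitted for brevity."""
    res = smf.ols("risk_taking ~ treated * post", data=panel).fit()
    return res.params["treated:post"]
```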

Discussion by: Mike Mariathasan

Mariathasan pointed out that insured depositors monitor less, but explicitly uninsured creditors monitor more. The control group of the paper consists of state-chartered savings banks in Massachusetts, insured up to unlimited amounts by the Deposit Insurance Fund (DIF). The higher coverage limit is a positive shock for the DIF and affects DIF-covered banks (the control group), and the liabilities of the DIF dropped after the Lehman Brothers failure; it is therefore not implausible that fully insured banks respond differently to confounding events than partially insured ones. Some banks opted into the Massachusetts charter after the Lehman failure, and this event is not exogenous. The authors should use a regression discontinuity design (RDD), following McGowan and Nguyen (2018), and study issuance around the state border. It is crucial to identify the effect of unlimited coverage in Massachusetts and clarify the mechanism at play.

Carolina López-Quiles Centeno’s response:

López-Quiles Centeno replied that any differences between banks in Massachusetts and other banks in terms of prudence are absorbed by the difference-in-differences approach. She welcomed the suggestion to use a regression discontinuity design and to study the effects on issuance around the border.

Joao B. Duarte: “Why is Europe Falling Behind? Structural Transformation and Services’ Productivity Differences between Europe and the U.S.”

Duarte presented his joint work with Cesare Buiatti and Luis Felipe Sáenz on labour productivity in Europe (GDP per hour) relative to the US. For a long time European productivity was growing and there was some convergence, but then the growth stopped and turned into a decline. The aim of the paper is to explain this phenomenon. Using the World KLEMS data, the authors show how labour shares and sectoral labour productivity changed over time and then, using a structural transformation model, try to explain aggregate labour productivity differences. In the model, relative changes in labour depend on changes in productivity. Sectoral productivity is decomposed into inputs – physical capital, information and communication technology (ICT), and total factor productivity (TFP). A calibrated version of the model explains labour reallocation across sectors: labour has been reallocated from agriculture and manufacturing mainly towards business services and health services. It also shows that Europe is less productive in wholesale trade, transport and storage, and business services; these are the main culprits for Europe falling behind the US. A decomposition of the productivity gap suggests that sectoral relative productivity differences are mostly a result of sectoral TFP differences.

Discussion by: Thomas Hintermaier

Hintermaier indicated that a multi-sector closed economy model is applied and shows that wholesale and retail trade and business services are mostly responsible for Europe falling behind. He pointed out that there are no intertemporal decisions in the model, the analysis focuses on long-run trends, agents have non-homothetic CES preferences, the model features sector-good specific income elasticities, production is linear in labour, and the economy is closed. He suggested opening the model economy: European countries are quite open, and this may introduce interaction effects determining factor allocations across sectors. Trade patterns and the terms of trade may interact with sectoral productivities. The decomposition of sectoral labour productivities involves physical capital and ICT to explain the detected differences, but the theoretical model contains neither capital nor any intertemporal choice. Frictions related to capital might actually matter, yet they are missing from the model. If we assume that capital is there implicitly, it is fixed, whereas changes in capital might explain persistent productivity differences.

Joao B. Duarte’s response:

Duarte replied that capital in the model is indeed implicitly embedded in labour productivity. Any institutional frictions are captured by TFP differences, but they might still work through capital, which is missing. The authors could not obtain data on sectoral differences in institutional regulation.

Johannes Fleck: “Income Insurance in Fiscal Federations: Evidence from the USA”

Fleck presented his joint work with Chima Simpson-Bell, outlining the reasons why federal rather than regional governments should provide insurance against income risk, and the limitations federal policies face. The paper asks how much public insurance against labour income shocks is provided in the US. In the model, the federal government faces state-specific prices, policies and income distributions. The authors build a microsimulation model of US federal and state policies which considers federal and state income taxes and transfers, estimates changes in household tax liabilities and transfer entitlements as labour income changes, and includes a measure of the ‘real value’ of the associated insurance. The model overcomes data constraints and decomposes the insurance provided by federal and state policies. An income shock can be absorbed by public and private insurance and by a change in consumption, but in the model the private insurance channels are shut down. The characteristics of a prototype family are held fixed. In the experiment, the family receives 45 subsistence baskets (each including the minimum monthly food expenditure plus a rental payment) and five of them are taken away. The authors calculate the changes in real disposable income and the insurance values, and find considerable heterogeneity between states: in Idaho, people lose 4.6 out of the 5 baskets, while in Wisconsin they lose only 2.69. When the level of combined insurance is low, it comes almost completely from the federal government, but there is a limit to what the federal government can do; where more insurance is provided, it comes from the state. The next step would be to study what drives this policy heterogeneity.
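
The accounting behind the insurance measure can be illustrated with a stylised calculation: disposable income is computed before and after the income loss, and the insurance value is the share of the gross loss that does not pass through to disposable income. The flat tax rates and the simple top-up transfer below are made-up placeholders, not the imputed federal and state schedules used in the paper; only the accounting logic follows the description above.

```python
# Stylised illustration of the thought experiment: a prototype family loses
# part of its labour income, and the share of the loss offset by taxes and
# transfers measures the public insurance it receives. All schedules are
# hypothetical placeholders.
def disposable_income(gross: float, fed_rate: float, state_rate: float,
                      transfer_floor: float) -> float:
    """Net income after flat federal and state taxes plus a simple
    means-tested transfer topping income up towards transfer_floor."""
    net = gross * (1 - fed_rate - state_rate)
    transfer = max(0.0, 0.5 * (transfer_floor - net))  # 50% top-up, hypothetical
    return net + transfer

def insurance_coefficient(gross_before: float, gross_after: float, **policy) -> float:
    """Fraction of a gross income drop that does NOT show up in disposable income."""
    d_gross = gross_before - gross_after
    d_net = disposable_income(gross_before, **policy) - disposable_income(gross_after, **policy)
    return 1.0 - d_net / d_gross

# Example: income falls from 45 to 40 "subsistence baskets" (the experiment in
# the paper removes 5 of 45); here baskets are simply used as income units.
print(insurance_coefficient(45.0, 40.0, fed_rate=0.15, state_rate=0.05,
                            transfer_floor=35.0))  # about 0.5 with these placeholders
```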

Discussion by: Juan Dolado

Dolado said the paper makes an important contribution, providing a calculator to compare the level of income insurance received by households with given characteristics in different states. He asked why the authors assume that the federal government takes state policies as given – it should be the other way round. It is also problematic that the paper focuses on labour income shocks while unemployment insurance is not considered. Welfare programmes do not aim at consumption smoothing but rather at ensuring a decent welfare level, so if they do not achieve the former it is not their failure. In the model, labour is fixed, while labour mobility accounts for 50% of the long-run adjustment to state-specific shocks. He also queried the relevance of the research question as posed, since what matters is not equality of outcomes but equality of opportunities. It could be that the conclusion is the other way round – when state insurance is high, federal insurance is low. To obtain the paper’s result (a negative relationship between total insurance and the federal contribution), a negative correlation between state and federal transfers is needed.

Johannes Fleck’s response:

Fleck replied that subsequent work will focus on how state and federal policies respond to each other. Unemployment insurance has a small role quantitatively and is hard to impute in the current model setting.

Joachim Jungherr: “The Long-term Debt Accelerator”

Jungherr presented his work with Immo Schott, pointing out that cyclical fluctuations of output are sizeable. The question is why output fluctuates so much: because of large exogenous shocks, or because small shocks are endogenously amplified. The paper focuses on the latter – amplification through credit markets. In existing models debt is short-term only, whereas most firm debt is long-term. The paper introduces long-term debt into a standard model of firm financing and production and identifies a novel mechanism of financial amplification: a negative shock triggers an adverse feedback loop between low investment and high credit spreads. In the model, firms make decisions in the presence of debt inherited from the past and internalise the effects of their actions on the value of newly issued debt, but not on previously issued debt. If the ratio of previously issued debt to total debt is high, firms internalise only a small fraction of expected default costs and choose high leverage and high default risk. This triggers the amplification mechanism: a negative shock reduces investment, total debt drops, but previously issued debt is still high, so the ratio of previous debt to total debt rises, higher leverage and default risk are chosen, the credit spread increases, investment becomes more costly, and the loop closes. The mechanism is studied in a two-period model in which long-term debt from the past is exogenously given. Firms cannot commit to repay and lose a fraction of their assets if they default. The default threshold is high if leverage is high.
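
The internalisation logic can be illustrated with a stylised sketch (hypothetical numbers, not the Jungherr-Schott calibration): the firm only internalises default costs falling on newly issued debt, so when a shock lowers new borrowing while legacy long-term debt is still outstanding, the internalised share falls and higher leverage and default risk become attractive.

    # Stylised sketch of the internalisation channel described above (illustrative numbers).
    def internalised_share(new_debt: float, legacy_debt: float) -> float:
        """Fraction of expected default costs the firm takes into account:
        only the costs borne by newly issued debt are internalised."""
        return new_debt / (new_debt + legacy_debt)

    print(internalised_share(new_debt=50.0, legacy_debt=50.0))  # 0.5 before the shock
    print(internalised_share(new_debt=30.0, legacy_debt=50.0))  # 0.375 after investment falls
    # The lower internalised share leads to higher chosen leverage and default risk,
    # wider credit spreads and a further fall in investment - the loop described above.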

Discussion by: Tryphon Kollintzas

Kollintzas said the authors show that long-term debt amplifies the effects of exogenous shocks over the business cycle. The amplification is about 150% relative to a standard real business cycle (RBC) model, and negative shocks have larger effects than positive ones. An increase in long-term debt raises the expected future costs of default, due to a lack of commitment, leading to higher spreads. The model features perfectly elastic demand for output, and the analysis focuses on symmetric and Markov perfect equilibria. Prices of short-term and long-term debt depend only on the firm’s current state. Since the model is not a full RBC model, some restrictions are needed to solve it, for instance the capital rental rate must be fixed (perfectly elastic capital supply). But decreasing productivity reduces the interest rate, since the marginal product of capital goes down, and this would dampen the amplification effect by partially offsetting the increase in credit spreads. A full model with all interactions should be developed, and the entrepreneurs should be part of the model household.

SESSION 6: Banking Union and the ECB

Presentation:

  • Hugo Rodriguez | IAE-CSIC and Barcelona GSE

Discussants:

  • Jean-Pierre Danthine | Paris School of Economics
  • Emiliano Tornese | Deputy Head, Resolution and Crisis Management, European Commission
  • Roland Straub | European Central Bank

Hugo Rodriguez: “Banking Union and the ECB”

Rodriguez summarised eight Ademu papers (from a collection of over 120) related to the Banking Union and the ECB, taking both an economic and a legal perspective. He analysed the paper by Monti and Petit (2016), which discusses the legal basis of the European Banking Union (EBU), discretion in supervisory standards and various overlaps between the ECB and the EBU, and the paper by Amtenbrink and Markakis (2017), which focuses on the accountability arrangements for the ECB within the EBU, the lack of clear criteria against which the ECB’s performance in banking supervision can be assessed, and a gap in the ability of the European Parliament to assign consequences to the ECB’s conduct. He also summarised the paper by Jungherr (2016), which studies the consequences of bank opacity and shows that strategic behavior reduces transparency and increases the risk of a banking crisis.

He went on to discuss the paper by Adao and Silva (2016), related to monetary stability issues, which shows that firms’ cash holdings increased from 1980 to 2013. This made the interest rate channel of monetary policy transmission more powerful, thus enhancing its stabilisation power. Next was the paper by Gaballo and Marimon (2016), related to financial stability, which shows that credit crises can be self-confirming equilibria, providing a new rationale for credit easing policies such as the TALF; the theory is consistent with micro data on ABS auto loans in the US. The paper by Smits (2017) relates to both the SSM and monetary stability and compares the allocation of powers in the SSM and the Eurosystem with respect to reviewability, juridification and supervisory liability. The paper by Yiatrou (2016) tests the credibility of the bank resolution regime in the European Union and studies the consequences of insufficient resolution funds, while showing that a fully credible system could be too expensive to achieve.

Finally, Rodriguez presented an overview of his own Ademu working paper (2016), which studies the effects of narrow banking (100% reserve requirements). This proposal seems radical and is alleged to reduce the overall amount of liquidity, resulting in huge efficiency costs. It implies a separation of financial institutions into safe ones that only keep deposits and very risky ones. There are also cheaper alternatives for avoiding bank runs, such as deposit insurance. Including a realistic description of the modern monetary system in the model, however, radically changes the predictions of the traditional model. In particular, reserves do not compete with bank loans and there is no need to separate financial institutions. Reserves have an indirect effect on bank intermediation through the cost of producing loans and deposits. If central banks remunerated required reserves at the refinancing rate, narrow banking would have no effect on liquidity creation and bank intermediation. He concluded that there is potential for improvement in the EBU/ECB design, for instance through new policy measures such as credit easing and narrow banking. What needs further analysis is the role of the ECB/EBU design in supporting polarisation, the dynamics of banks’ balance sheets and the role of payment flows as a joint determinant of solvency and liquidity risks.

Discussion by: Jean-Pierre Danthine

Danthine agreed that narrow banking can be seen as an alternative to the European Deposit Insurance Scheme and said Rodriguez’s paper shows it may not be as radical as first thought. It was nonetheless, he said, a big jump into the unknown, especially if narrow banking is not strictly necessary. It is true that customer deposits are the cheapest input for bank credit, so with narrow banking the cost of credit will increase; for this not to happen, the interest rate needs to adjust so that the profitability of banks is maintained. Does this imply that monetary policy would be different? Such issues need to be taken into account in policy design. Narrow banking fundamentally changes bank activities – banks effectively become mutual funds – and this cannot be done at zero cost. It also fundamentally changes the functioning of the central bank, whose role would be to give collateralised credit to banks. But commercial banks are the ones who should create money; there should not be a committee at the central bank deciding how much credit to produce.

Discussion by: Emiliano Tornese

Giving a lawyer’s opinion, Tornese pointed out that a resolution framework without sufficient funds is less credible. The loss of credibility may affect small banks, but not those labelled as “too big to fail”; how small entities are affected should be addressed in the analysis. The EBU is missing its third pillar, yet financial backstops are crucial to provide the necessary stability. We need a system which addresses both tail risks and moral hazard and provides the same kind of protection to all depositors. The European Commission is discussing with the member states the design of the new architecture so as to avoid the risk of sharing losses. The system first needs to provide liquidity, but subsequently also the capacity to absorb losses. The total loss-absorption capacity of the different proposals should be evaluated. The ultimate design of the EBU is still a matter of ongoing debate.

Discussion by: Roland Straub

Straub indicated that the banking union and monetary policy transmission – from policy rates to bank lending rates – are closely related; what is really needed is a well-functioning transmission mechanism. Price-based and quantity-based composite indicators of financial integration in the euro area were improving before the crisis but started deteriorating afterwards. Financial integration can indeed evaporate very easily if it is not supported by the right policies. The crisis seriously impaired the ECB’s transmission mechanism, and the ECB faced several challenges: the lack of resolution frameworks in place; a negative feedback loop between insolvent governments and the banks holding their assets; and the measurement of illiquidity and insolvency – solvency was measured at the national level, while liquidity was measured at the European level. He noted how the pillars of the EBU helped to deal with these problems. The Single Resolution Mechanism and the Single Resolution Fund deal with the first challenge, though the importance of the Bank Recovery and Resolution Directive (BRRD) is underestimated. The European Stability Mechanism, providing a fiscal backstop for governments, deals with the second challenge on the side of the states, while the solvency checks carried out by the ECB under the Single Supervisory Mechanism deal with the third one (and with the second one on the side of banks). In addition, the balance sheet policies of the ECB helped to improve both credit and financial integration indicators. This clearly shows that financial integration needs to be backed up by the right policies.

SESSION 7:
Macroprudential policies in financial markets

Presentation:

  • Radim Bohacek | CERGE-EI
  • Richard Portes | London Business School and CEPR

Discussants:

  • Charles Bean | London School of Economics and CEPR
  • Juan Francisco Jimeno | Bank of Spain and CEPR
  • Francesco Molteni | European University Institute

Radim Bohacek: “Macroeconomic and Financial Imbalances and Spillovers”

Bohacek positioned his research within the Ademu project as concerning macroeconomic and financial imbalances and their cross-border spillover effects. He gave a brief introduction to macroprudential policies, whose aim is to ensure financial stability by containing systemic risk with the help of supervisory and regulatory instruments. Most macroprudential restrictions are ex ante, such as leverage restrictions during good times. However, the benefits of such regulation come with costs, and he outlined a model from one of his Ademu working papers to analyse this trade-off – a heterogeneous-agent model with incomplete markets and firms investing in risky projects which need financing.

Workers differ along the dimensions of skills and assets. A social planner with full information would implement perfect risk-sharing and skill-based efficiency, an allocation which can in theory also be implemented in a decentralised manner. Once frictions are introduced, however, this is no longer the case. In particular, skills are unobservable, and this imperfect monitoring leads to adverse selection and moral hazard problems, which introduce additional incentive and threat-keeping constraints into the problem. The possibility of default is another friction which limits the feasibility of the first-best solution. This leads to partial risk-sharing and a misallocation of resources through precautionary savings, entailing efficiency and welfare losses. These frictions can be mapped into a single collateral constraint.

The model results illustrate the key trade-off facing macroprudential policy. With zero restrictions and no limits on borrowing, there are no losses due to limited risk-sharing, since agents are free to trade in financial markets – but losses due to imperfect monitoring are then at their highest, as leverage can be excessive when a crisis hits. This is because of an aggregate demand externality: individually rational borrowers can take on leverage that is excessive from a social point of view, as they do not take into account general equilibrium effects. Greater ex ante leverage leads to a greater ex post reduction in aggregate demand and a deeper recession. If macroprudential regulation limits leverage, it also limits risk-sharing, which involves efficiency costs, but losses due to imperfect monitoring can then be mitigated when the crisis hits (even if it has a low probability). In sum, in good times restricting leverage is costly as it limits the efficient allocation of resources to their most productive use; on the other hand, the accumulation of assets prevents excessive deleveraging during a recession. There is a trade-off between efficiency and stability, between certain short-term costs and uncertain long-term benefits.

The key message is that general equilibrium adjustments are important, and these might not always be internalised by individual actors. Because of these complex and dynamic linkages, macroprudential policies are better suited to address such issues than microprudential regulation.

Richard Portes: “Interconnectedness – imbalances, capital flows and risks in the shadow banking system”

Portes noted how the unwinding of large domestic and international macro imbalances prompts large declines in asset prices, a rise in default risk and a fall in capital market liquidity. He focused on the role of the shadow banking system and its interconnectedness with traditional financial intermediaries in amplifying and propagating risk, leading to contagion-like spillovers transmitted across sectors and national borders. This type of interconnectedness is the essence of systemic risk, and is best addressed by macroprudential (as opposed to microprudential) regulation. He explained that in the case of small shocks a more densely connected financial sector helps risk-sharing and enhances financial stability by spreading risk out. For large enough shocks, however, several actors can be severely affected even after the risk has been spread out, which is why, by supporting the transmission and propagation of the shock, dense interconnections contribute to systemic risk and contagion.

There is an apparent gap in the understanding of the precise interconnections within Europe’s financial system, especially with regard to shadow banks. In their latest project, Portes and co-authors set out to map the exposure of European banks to shadow banking entities in order to facilitate the design of appropriate macroprudential policies. Exposure can be direct (a counterparty relationship or ownership) or indirect (entities with common exposures, collateral chains or step-in risks, where a bank provides implicit guarantees for off-balance-sheet items, like SIVs). They use granular data collected by the European Banking Authority and ask which EU banks are exposed to what type of shadow banking entities, also accounting for where these shadow banks are domiciled. The results indicate that EU banks have substantial exposure towards shadow banks outside the European Union, and therefore outside its regulatory reach, which is in itself a risk; 27 percent of this exposure is towards US entities.

According to Portes, normal stress tests are difficult to conduct for shadow banks as they do not take into account direct contagion through exposures or indirect contagion through deleveraging and fire-sale externalities, which depress prices. Cross-border spillover effects are also typically ignored. Identifying potential contagion paths, he noted that if individual banks hold sufficiently diversified exposures towards shadow banking entities, high average levels of diversification are likely to lead to a high degree of overlap among banks’ exposures, meaning that different EU banks have a common exposure to the same shadow bank. These common sources of vulnerability can increase systemic risk. System-wide vulnerabilities and systemic risk (ignored by today’s stress tests) should therefore be a key focus for macroprudential regulation. He also called on academics to use the available data to assess these phenomena and to help design better policies.
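
The overlap point can be made concrete with a small hypothetical sketch (the exposure matrix below is invented for illustration and is unrelated to the EBA data used in the paper): when each bank spreads its exposures across many shadow-banking entities, their portfolios end up sharing the same counterparties.

    # Hypothetical illustration of common exposures arising from diversification.
    exposures = {                       # bank -> {shadow-banking entity: exposure}
        "Bank A": {"SB1": 40, "SB2": 30, "SB3": 30},
        "Bank B": {"SB2": 50, "SB3": 25, "SB4": 25},
    }

    def common_entities(bank_x: str, bank_y: str) -> set:
        """Shadow-banking entities to which both banks are exposed."""
        return exposures[bank_x].keys() & exposures[bank_y].keys()

    print(common_entities("Bank A", "Bank B"))  # e.g. {'SB2', 'SB3'} - a shared vulnerability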

Discussion by: Charles Bean

Bean found the presentations to be contrasting but complementary contributions. Bohacek’s paper, he said, nicely exposes the trade-offs faced by macroprudential policy, with some minor caveats. Leverage might get too high in the good state of the model since the transition to the bad state is in essence a zero-probability event; moreover, this is a model of loans to small and medium-sized enterprises and how these loans can be restricted, while in reality a large portion of lending went to households in the form of mortgages. This would necessitate the use of LTV and LTI ratios, and would also raise distributional issues. He also pointed out that the paper uses the terms ‘leverage’ and ‘collateral’ interchangeably (due to the simple setup of the model), while in reality these concepts are distinct. The time-invariant margin requirements could be made time-varying to provide a more realistic model, and the inclusion of countercyclical capital buffers could improve the paper, he said.

The presentation by Portes, said Bean, focuses on the role shadow banks play in determining systemic risk in the financial sector. Mapping out the network of banks’ exposures to other financial intermediaries is very useful and helps policymakers gain a clearer picture. In addition to the asset-side exposures mapped in the paper, liability-side exposures could also be important. He asked whether the sort of survey used to analyse this network of exposures is effective, and concluded that the paper presents a lot of interesting descriptive statistics, but ultimately these should be used to help design proper systemic stress tests and/or macroprudential policies.

Discussion by: Juan Francisco Jimeno

Jimeno described the current state of macroprudential policy-making, defining the objectives of macroprudential policies as controlling systemic risk and ensuring financial stability, in order to be able to manage the economic cycle in a way similar to traditional macro policies. As instruments, it can use capital buffers and leverage ratios, and it has to account for potential spillover effects across agents, countries and time. Like other policy branches, it also faces the usual constraints of time inconsistency and political economy considerations. The theory behind macroprudential policy starts from Modigliani-Miller, which can serve as a benchmark against which all the frictions present in the financial system – moral hazard, limited liability, collateral constraints, etc. – can be assessed. These frictions matter because they prevent the efficient allocation of resources in the economy and can lead to boom-bust cycles, with implications for the transmission of shocks, welfare and distributional issues. Research into these questions is still in its infancy: DSGE models introduce one friction at a time and one instrument to address it, but such ad hoc modelling might fail to capture interactions with other frictions in the goods and labour markets, or might run into computational difficulties. In practice, designers of macroprudential policies face other pressing challenges, including deciding who is in charge, how it should interact with other policies and whether it should focus on crisis prevention or crisis resolution. He noted there are many things not yet fully understood about macroprudential policies, as opposed to the extremely complex microprudential regulations.

Discussion by: Francesco Molteni – Margin regulation in financial markets

Molteni presented the main findings of his own work, which he said was motivated by the objective of financial regulation to reduce the procyclicality of credit and leverage. Leverage, in turn, is closely connected to collateral requirements. In particular, margins or ‘haircuts’ – the difference between the value of the collateral and the amount of funding investors can obtain against it – are inversely related to leverage, so understanding how they work and how they should be regulated is of crucial importance when designing macroprudential policies. Haircuts usually increase together with credit risk, which makes borrowing more difficult. The loss of funding can trigger fire sales, which further push up yields and make haircuts even larger, leading to a vicious cycle. In order to evaluate the transmission of these haircut shocks, he builds a DSGE model. The results indicate that yields increase after such a shock, resulting in falling output and inflation. The procyclicality of haircuts reinforces the amplitude of asset price fluctuations and their transmission to the real economy. Financial regulation should aim to reduce this procyclicality, for instance by setting a fixed numerical floor or time-varying haircut buffers. There are also proposals for the creation of safe euro bonds, which could be used as collateral in the repo market with stable haircuts.
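
The inverse relation between haircuts and leverage can be sketched with simple arithmetic (hypothetical numbers, not taken from the paper): with a haircut h, one unit of collateral supports only 1 - h units of borrowing, so the maximum leverage on own funds is 1/h and a rising haircut forces deleveraging.

    # Stylised arithmetic of the haircut-leverage link (hypothetical numbers).
    def max_leverage(haircut: float) -> float:
        """Maximum assets-to-own-funds ratio achievable with repo funding at a given haircut."""
        return 1.0 / haircut

    print(max_leverage(0.02))  # 50x with a 2% haircut in calm markets
    print(max_leverage(0.10))  # 10x once the haircut rises to 10% in a stress episode
    # The forced deleveraging triggers fire sales, higher yields and still larger
    # haircuts - the vicious cycle described above.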

Floor discussion

Ramon Marimon (EUI) asked what implications Brexit has on this discussion. Charles Bean replied that macroprudential regulation in general requires international cooperation. The US is also outside Europe, so the UK would be just another country with which to deal, but fundamentally not much would change. It is true, however, that in practice, Brexit could indeed make cooperation harder in the future.

SESSION 8: Legal limits to EMU reform: assessing the options

Presentation:

  • Giorgio Monti | European University Institute

Discussants:

  • Armin Steinbach | European University Institute
  • René Smits | University of Amsterdam
  • Tuomas Saarenheimo | Finnish Ministry of Finance

Giorgio Monti: “Legal limits to EMU reform: assessing the options”

Monti outlined four proposals from the European Commission’s 2017 Roadmap: bringing the Fiscal Compact under EU Law; amending the ESM Treaty and bringing it under EU Law; redeploying and expanding the budget; and creating the position of a European Minister for Economy and Finance.

Focusing on the ESM and the proposals connected to it, he placed these reforms in the context provided by the recent book by Brunnermeier, James and Landau – The Euro and the Battle of Ideas – contrasting the so-called German and French visions. The former advocates better discipline and regulation ex ante, while the latter pushes for more solidarity once a member state gets into trouble. With respect to the ESM, the German vision translates into giving it more power over Member States by overseeing compliance with the Fiscal Compact and the Stability and Growth Pact; it would be a technocratic institution which could also serve as a fiscal backstop for the Single Resolution Framework. In contrast, the French vision sees the role of the ESM as closer to that of the IMF, providing resources so as to avoid destructive austerity policies.

The Commission’s proposal is not as ambitious as the two visions outlined above, but stands in the middle. It also raises some legal issues: in particular, the Commission aims for an institutional reconfiguration whereby the ESM Treaty would be integrated into the EU Treaties. Unifying a messy patchwork of separate packages under EU Law can increase transparency and accountability (to the European Parliament and the Council) relative to an international treaty. However, it is not clear that there is a legal basis to do so: the Commission has the power to legislate only if it is necessary to attain the Treaty’s objectives, and it can be argued that the ESM has fulfilled its role in its current form. Integrating it into the EU institutional framework does not seem crucial for achieving these objectives.

It is also not clear what the term ‘unique legal entity’ means. Under EU Law, the ESM’s decision to grant support to a Member State would still be subject to the Council’s approval, which would render it an agency of the existing EU institutions – or would it be a fully independent institution akin to the ECB? Monti concluded that, on this basis, it is not clear that bringing the ESM under EU Law is necessary: reforming the ESM Treaty could achieve similar outcomes.

Turning to Ademu’s proposal for a European Stability Fund, he said the establishment of the ESM had been contentious to some, who claimed that it amounts to bailing countries out, violating the EU Treaties’ “no-bailout” rule (Article 125 TFEU). However, such assistance might sometimes be necessary to safeguard the financial stability of the euro area, so in this sense it is not motivated by solidarity or by helping the bailed-out country (as established in the 2012 Pringle case at the European Court of Justice).

Assistance from the ESM also needs to be subject to strict conditionality; this contrasts with Ademu’s ESF proposal, in which there are no Memoranda of Understanding but instead a long-term contract with flexible repayment rates, the terms varying with the beneficiary’s economic performance.

A softening of the Pringle criteria might be required for this proposal to materialise. Eligibility could be restricted to well-behaved states (rather than any state at any time, as proposed by Ademu). In terms of conditionality, Monti asked whether the enforced self-discipline in the ESF long-term contract is enough to ensure that the beneficiary’s policies will serve the interests of the EU. It may well be that outside EU force is necessary to “rescue member states from their own bad policies” – he cited utilities liberalisation as an example. Finally, he raised the issue of the impact the ESF would have on the rest of EMU governance: do we still need a Stability and Growth Pact? Will the ESF be incorporated into an EU budget?

Discussion by: Armin Steinbach

Steinbach raised several points with regard to the Unemployment Insurance Scheme. EU competence for social policy is limited and it is hard to harmonise national systems. The no-bailout clause could be violated unless the unemployment shock can be presented as an emergency situation crucial for the stability of the EMU, although it could be argued that the UIS is insurance rather than a transfer. Country-specific taxation could avoid moral hazard and provide incentives to member states, but coordinating taxes across member states might be problematic. Existing funds can serve as a starting point for such a European insurance fund, but labour laws would need to be harmonised. In general, the more harmonisation is necessary or the larger the transfers are, the likelier it is that legal problems will arise.

Shifting from a sanctions-based to a reward-based system, for example giving money for reforms, raises the question of how to value structural reforms and which reforms should be funded. There is also the issue of windfall gains, whereby a country could obtain rewards for reforms which would have been undertaken anyway. The legal implementation is a further consideration: whether it takes the form of a contractual agreement, a voluntary arrangement or a Memorandum of Understanding.

Discussion by: René Smits

Smits noted the challenges of ESM reform (Article 352 TFEU), citing the lack of democratic legitimacy and judicial control outside the Treaty, and referred to the need for a euro area budgetary stimulus. Solidarity is a core EU value, he said. Technical problems call for technocratic approaches, but legal procedures are also important, always keeping EU values in mind. He suggested a fund that could fulfil the role of central fiscal stabilisation for the euro area and concluded by saying that the North/South divide will make or break the EU.

Discussion by: Tuomas Saarenheimo

Saarenheimo pointed out that he was an economist on a legal panel and would focus on the broader issue of democratic accountability within the EU and how it should be taken into account during institutional reform and crisis management. He raised the question of the purpose of legal constraints: is it to limit populist impulses or to control majority power? In his opinion, if there is a strong political will to do something, ways will in the end be found to provide a legal basis for it. That is not to say that the EU, its agencies or the Member States have done anything illegal, but we are certainly in uncharted territory.

The issue of coordination is sometimes tied to the EU’s power to sanction Member States, which can lose access to EU structural funds if they do not comply with criteria or recommendations. But it is not clear where the line lies between these being a coordination device of the EU and their becoming orders for sovereign nation states. The regulations issued by the European Commission operate almost completely outside the reach of democratic accountability. Decisions about public spending and social policies are the most political decisions a country can take, and if they are outsourced to the Commission, what remains of the domestic political space? Euroscepticism is bound to rise. He was not critical of the Commission, saying it did a good job in operating the administrative framework during the euro crisis, but reiterated that these issues need to be addressed. He noted that the distinction between sanctions and benefits does not make much sense in political discourse, as the withdrawal of a benefit can be seen as a sanction.

When a country needs financial assistance, the Commission negotiates a programme with it, and that programme may dictate policies which affect millions. But why should a country which runs out of money also lose its sovereign right to decide? In his opinion this approach is problematic, and the former Greek finance minister Yanis Varoufakis is not completely without justification when he complains about unelected technocrats interrogating elected ministers. As an alternative, Saarenheimo suggested that bailout programmes should not grant the EU/ESM any powers which are not in the Treaty and should only be concerned with the desired outcomes. Ownership should be returned to the Member States.

Floor discussion

Árpád Ábrahám (EUI), one of the authors of Ademu’s ESF proposal, clarified that the ESF has no right to tell a country what to do, but rather provides incentives through the ex post, permanent conditionality of its long-term contract. We have to think of both the ESF and the EUIS as insurance rather than as a bailout or transfer, he said. The design of these programmes ensures that participants pay for this service during good times. He suggested the Stability Fund should have a status similar to that of the ECB in order to avoid the politicisation of the fund. Ramon Marimon (EUI) added that the harmonisation required for the EUIS is not impossible, and that there can be a Pareto improvement for all countries from policy coordination.

Giorgio Monti concluded that it depends on the design. States going to the ESM might lose credibility, but if conditionality requirements are tough enough, they may actually gain credibility.

SESSION 9: A New Fiscal and Monetary Framework for the EMU?
The EU Presidents’ roadmap in 2018

Moderator:

  • Xavier Vidal-Folch | EL PAIS

Presentation:

  • Ramon Marimon | European University Institute

Panel participants:

  • Joaquín Almunia | Former Vice President and European Commissioner for Economic and Monetary Affairs
  • Roel Beetsma | University of Amsterdam and European Fiscal Board
  • Marco Buti | European Commission, Director-General DG Economic and Financial Affairs
  • Paivi Leino-Sandberg | University of Helsinki
  • Frank Smets | European Central Bank

Ramon Marimon: “A New Fiscal and Monetary Framework for the EMU”

Marimon thanked everyone involved in the Ademu project, including those responsible for the administrative work, and the European Commission for a fruitful cooperation. He reiterated that the Ademu framework is closely related to the themes of the Four Presidents’ Report (2012) and the Five Presidents’ Report (2015) and provides a basis for assessing this roadmap. As in the famous quote by Jean Monnet, “Europe will be forged in crises” – and indeed this is true again, as the response to the Great Recession shapes the future European framework.

The Economic and Monetary Union consists of three unions: the Monetary Union, the Economic and Fiscal Union and the Financial Union. Each of them has a different objective: price stability, economic stability and financial stability, respectively. The goal of the EMU is to solve time-inconsistency problems: the Monetary Union deals with monetary policy inconsistency – mainly competitive devaluations; the Economic and Fiscal Union deals with procyclical fiscal policy; and the Financial Union with local bailouts. Before the crisis the EMU was incomplete. The ECB was the institution of the Monetary Union and the Stability and Growth Pact that of the Economic and Fiscal Union, but it turned out that they alone were not enough. The Economic and Fiscal Union had to be supplemented with the Fiscal Compact, the Macroeconomic Imbalance Procedure, the European Semester and the European Stability Mechanism, while the Financial Union had to be created from scratch, which resulted in the creation of the Banking Union with the Single Supervisory Mechanism and the Single Resolution Mechanism (which still needs a fiscal backstop) and the ongoing work on the Capital Markets Union and the European Deposit Insurance Scheme. A comparison of this reformed EMU with the US and other federal states suggests that the Monetary Union is similar and more or less complete, but there is no European Treasury; the Fiscal Union is more complex, idiosyncratic and incoherent, while the Financial Union is similar but also more complex and still incomplete.

The Economic and Fiscal Union as it stands is still based on peer pressure among the member states: the system does not provide effective carrots and constrains members’ performance with non-credible sticks, and watchdog surveillance is mixed with non-credible threats. The ESM plays only the role of a crisis resolution mechanism. Current stabilisation policies involve the European Fund for Strategic Investments, but it is growth-oriented rather than focused on stabilisation. When the old framework was designed, the assumption was that countries would first converge and risk sharing would be developed afterwards. This turned out to be wrong – risk sharing is needed both in crises and in normal times, and the architecture of the EMU now needs to be designed taking heterogeneity into account. Debt contracts in the EMU should be ex post state-contingent, depending on the history of shocks and members’ performance; non-contingent risky debt should be transformed into a state-contingent riskless asset. This is precisely the idea behind the Ademu proposal for the European Stability Fund, which offers long-term contracts respecting sovereignty, induces counter-cyclical policies and accounts for moral hazard problems. The ESF could also act as a backstop for the SRM and the European Unemployment Insurance Scheme.
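
A stylised sketch may help illustrate the state-contingent repayment idea (the schedule below is purely hypothetical and is not the contract design worked out in the Ademu papers): repayments fall when the member state is hit by a bad shock and rise in good times, making the debt burden counter-cyclical.

    # Hypothetical state-contingent repayment schedule (illustration only).
    def repayment(base_payment: float, gdp_growth: float,
                  trigger: float = 0.0, slope: float = 0.5) -> float:
        """Repayment scales down in downturns and up in expansions."""
        return max(0.0, base_payment * (1.0 + slope * (gdp_growth - trigger)))

    print(repayment(base_payment=10.0, gdp_growth=0.03))   # 10.15 in an expansion
    print(repayment(base_payment=10.0, gdp_growth=-0.04))  # 9.8 in a recession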

Discussion by: Joaquín Almunia

Almunia said that although he is a former vice-president of the European Commission, he would focus on his personal ideas. He referred to the documents outlining the EMU roadmap: the Four Presidents’ Report, the Commission’s White Paper and the Five Presidents’ Report. It initially seemed, he said, that the Merkel-Macron duo would speed up the process of reforming the EMU, but he is now more pessimistic, given the difficulties of the Franco-German tandem in making effective proposals and the political developments in Italy. He noted that we are still waiting for the Single Resolution Mechanism backstop and the deposit insurance scheme, so many years after the launch of the Banking Union. Completing the Banking Union is now one of the priorities.

He called for simpler fiscal rules, as the current ones (six-pack, two-pack, Fiscal Compact) are difficult to enforce in a clear way and subject to different interpretations. We are now in a very different fiscal situation than in 2012: room for manoeuvre is more constrained and will be even more so after the replacement of Mario Draghi as president of the ECB. Budget deficits are small but the stock of debt is still high, and interest rates are likely to stay low for some time. What needs to be discussed is the solution to the debt overhang problem and the crisis management and resolution mechanism for future crises. Fiscal policy should play a more active role and take over some of the responsibilities of monetary policy. Democratic legitimacy is also important – with the current deficit of democratic support we will not be able to provide stability.

He said there are five crucial elements in the design of the EMU architecture. The new framework needs to specify the orientation of fiscal policy in the EMU after the correction of the most difficult fiscal imbalances, and to determine how to coordinate the fiscal policies of the member states, especially when monetary policy is not as accommodative as it is now. Both risk sharing and risk reduction are needed – the (halfway) mix between the two has to be settled politically. Policy proposals and decisions should come first, followed by decisions on the institutional setup; we need to know the responsibilities of a given function or institution before appointing someone to the role. The community method of decision-making should be preferred over intergovernmental agreements: for instance, the finance ministers of the Eurogroup are not convinced that they are responsible for the euro area, in contrast to the ECB governors. Fiscal policy in the EMU has very little political power, but the proposed European Stability Fund may help to improve this. More growth is needed to resume the real convergence of the euro area; without convergence, the system will not be sustainable. Better coordination of policies is needed, including completion of the Single Market, to advance towards a real economic union.

Discussion by: Roel Beetsma

Beetsma reviewed the European Commission’s roadmap, its aims and proposals, including the establishment of a European Monetary Fund, the integration of the Treaty on Stability, Coordination and Governance into the EU legal framework, and a discussion of new budgetary instruments for the euro area and a European Minister of Economy and Finance. He discussed risk sharing and risk reduction and said the European Fiscal Board agrees that the two are complements and that progress on both fronts is needed. Structural reform got stuck because of opposing views – the advocates of risk reduction do not want to pay for the mistakes of others, while the advocates of risk sharing see themselves as victims of a moral diktat.

With regard to fiscal policy reform, he reminded participants that the revision of fiscal rules has been postponed until 2020 or later. This is unfortunate, he said, because enhanced fiscal risk sharing via a Central Fiscal Capacity (CFC) and risk reduction via reform of the fiscal rules need to go together. Optimal currency area theory has long made clear the need for risk sharing in the EMU. In the EMU private risk sharing is very limited, but we will see more of it once the Banking Union progresses. More financial integration will facilitate private risk sharing, but it may also raise volatility in times of crisis – which is when public risk sharing is particularly needed. The European Fiscal Board is strongly in favour of a Central Fiscal Capacity; however, access to the CFC should be conditional on compliance with fiscal rules.

Reform of the Stability and Growth Pact is needed, since it has so far worked imperfectly, owing to procyclicality, and has become very complex. It should be simplified and feature one objective, a simple operational rule and parsimonious use of escape rules. Complete contracts cannot be drawn up, which means that expert judgement is unavoidable; the question is who should exercise it. In the Ademu proposal for the European Stability Fund, money flows are automatic, based on the state-contingent contract, but the question is how this would work in practice. The roles of assessment and of the political decision on further steps should be separated, so that independent assessment gains more importance. Finally, he referred to the paper by Benassy-Quere et al. (2018) (the proposal of 14 French and German economists), noting that junior bonds financing new debt for spending above a benchmark may play a useful role in strengthening fiscal discipline, but they also carry the risk of judgement calls about when the threshold has been exceeded, and problems related to legal enforcement.

Discussion by: Marco Buti

Buti said it is not possible to solve all problems with one algorithm. In the EU there is still no common narrative, and there are tensions between the euro area as a whole and national perspectives. The EMU today is an unsustainable equilibrium between an incomplete banking and capital markets union, insufficient adjustment mechanisms and no central fiscal stabilisation function. We need to deal with insufficient private and public risk sharing, an overburdened monetary policy and the risk of renewed financial instability. A central fiscal stabilisation capacity is vital for the sustainability of the EMU, hence the Commission welcomes the Ademu proposals on the European Unemployment Insurance Scheme and the European Stability Fund. The EUIS proposal meets the key criteria: sizeable stabilisation, no permanent transfers and no change to existing national labour market and unemployment insurance policies. The proposal on the ESF is timely and ambitious. He liked the idea that countries should contribute more in good times and that the contract terms are based on their risk history and exerted effort, to deal with the moral hazard problem, but asked how such theoretical contracts would work in practice. He indicated that the Commission proposes that central fiscal stabilisation should be aimed at maintaining the investment base and triggered by high unemployment levels, but that it should focus first on cheap loans and later on an insurance instrument. The European Stability Mechanism should be strengthened (but not with a contract) and be based on providing cheap loans with light conditionality; these should be precautionary instruments to avoid a loss of market access. The contribution of Ademu is very timely and helpful in designing these institutions, he said.

He then proceeded to debunk four myths. The first is that a lack of fiscal discipline means that fiscal rules do not work; on the contrary, the data and forecasts show otherwise. The second myth is that a central fiscal stabilisation function is not necessary if countries rebuild their fiscal buffers. This is not true: when large shocks hit, tax-intensive GDP components fall and the tax base collapses, so a large fiscal capacity is needed. Meeting medium-term objectives is not enough when a country is faced with large shocks (although there is room for exceeding the 3% deficit cap as long as the market keeps lending to the state in question). The third myth is that with a sovereign debt restructuring mechanism we would be finished with the bailout culture. This is not true, since sovereign debt restructuring needs to be done when it has to be done (in cases of debt unsustainability); and risk reduction and risk sharing are not substitutes – without risk sharing there will be more bailouts. According to the fourth myth, completing the Banking Union is all that is needed; it is needed, but it is not enough in bad times, owing to the pro-cyclicality of financial markets. Private risk sharing cannot be relied upon, so public risk sharing has to be there as well.

Discussion by: Paivi Leino-Sandberg

In the EMU, said Leino-Sandberg, we have a highly centralised monetary policy pillar, but the other two pillars of the Union are incomplete and not integrated enough. They need to be brought to the same level, as in the case of the Monetary Union. She asked if this is the correct gap to be examined; maybe we are forgetting a more relevant gap – the Political Union pillar – which has not been subject to much discussion over the past two days.

The political union needs to draw a line between what is done at the state level and what is done at the federation level, and that division needs to be backed up politically. Some elements of the proposals discussed have never been considered by established federations, owing to the political nature of transferring money and risk. Since the EU is a constitutional democracy, politics needs to play a role in this structure. There is potential to bring political integration forward. The EU is successful in regulatory issues; learning from this experience, it should navigate towards a regulatory state with common rules, monitoring and enforcement. Trouble arises when institutional solutions are taken from states and given to technocrats. As the nature of these tasks does not change, the political consensus should be sustainable. If the political issues do not kick back when the measures are adopted, they will kick back at a later stage when there is an attempt to enforce them. It cannot be done without politics.

Discussion by: Frank Smets

Smets said the EMU is an interesting case for research, involving complex and challenging issues. Studying these problems requires taking all the trade-offs into account while thinking them through with models. The Monetary Union can be seen through the lens of optimal currency area (OCA) theory, but Ramon Marimon’s interpretation in terms of solving time inconsistency is also very useful. The question is how time inconsistencies of a fiscal and financial nature affect the monetary policy of the ECB – how large the risks of fiscal and financial dominance over monetary policy are. The Maastricht Treaty dealt with this issue quite well by giving the ECB a clear mandate (price stability) and accountability. If the ECB is perceived as a strong institution, it is because of the strong rules it follows. The ECB has delivered in terms of price stability and kept inflation expectations anchored at the right level. Exceptional times, however, called for drastic and unconventional measures such as quantitative easing (QE), long-term refinancing operations (LTROs) and negative interest rates. The use of these tools has been justified on the basis of the price stability mandate (even if there are some side benefits and side costs due to the incompleteness of the Union): to repair the monetary policy transmission mechanism, to address specific fragmentation problems and, more recently, to deal with the zero lower bound (ZLB) problem.

The Financial Union – the Banking Union – needs to be completed to strengthen the European financial sector and make it more robust and integrated. An important part of the architecture is prevention – supervision needs to be effective, strong and fair – and the system needs effective measures for both prevention and crisis management. A common backstop from the Fiscal Union is also needed: one cannot have a currency union if deposits across the common currency area are not completely identical in terms of risk and fully transferable. Beyond the Financial Union, structural reforms need to continue in order to make the economies more resilient. Although they are in the interest of national governments, they are not carried out, owing to political constraints, vested interests and time inconsistency; it is important to think about how governments can politically survive such reforms and how these constraints can be overcome. Finally, the Fiscal Union needs to be completed. He considers the Ademu ESF proposal a comprehensive and realistic approach: it puts the right incentives and constraints in place, addresses moral hazard problems, creates safe assets and takes into account the reality that the EMU is not a transfer union. The ESF deals with every aspect in a simple framework, all of which is to be applauded. The question that remains is how it can be implemented in practice. He liked the idea of the 14 French and German economists on senior and junior bonds, allowing for more market discipline.

Floor discussion:

A comment from the audience about a central fiscal capacity and fiscal rules suggested that the reason for breaking fiscal rules was not a lack of understanding of them but the fact that they are unpopular at home; this will not change even if a central fiscal capacity is more transparent.

Roel Beetsma said that some member states want a combination of more risk sharing with a central fiscal capacity, while others are not satisfied with the current enforcement of the fiscal criteria. To reconcile these views, the fiscal criteria need to be stricter but include escape rules. It is crucial to make it easier to obey fiscal rules and to combine risk reduction and risk sharing.

Xavier Vidal-Folch (El País) pointed out that risk sharing and risk reduction were a common theme during the talks and asked whether this can form the basis for the road ahead. Joaquín Almunia answered that they need to be traded off against each other.

Marco Buti pointed out that certain elements of the new framework could belong to both risk sharing and risk reduction, and that one should not be too dogmatic when deciding where to assign particular solutions.

Coen Teulings (University of Cambridge) said that one cannot simply extrapolate behaviour before the crisis to the period afterwards. In his opinion, conducting monetary policy is more technical, while fiscal policy is more political. Ramon Marimon suggested the solution is to think for the whole euro area and EU, as the ECB does, but on the fiscal side as well. This requires a certain institutional build-up.

Martin Sandbu (Financial Times) asked about the potential limits of the ESF by suggesting the following scenario. Suppose the ESF is already in place but, owing to the limits of solidarity, it has only 5% of euro area GDP at its disposal. It then lends to its maximum capacity, but more might be needed (as in the previous crisis). In this situation one could end up in a crisis very similar to the recent one, as ultimately access to financial markets is lost.

Ramon Marimon responded that the ESF takes care of this scenario as well: the ex ante risk assessment built into the system ensures that debt is sustainable. In the situation described, debt can be cut into three tranches: sustainable, unsustainable (restructurable) and the middle one – the line needs to be drawn somewhere. The middle tranche would be taken care of by the ESF by making it safe (the ESF itself is made safe by design). This is not like junior bonds, which would contain all the risk.

Rody Manuelli (Washington University in St. Louis) asked why financial markets do not like state-contingent bonds and, if there is such a great demand for insurance, why countries do not issue state-contingent securities. The panel participants agreed that the problem is the potentially low liquidity of such instruments: holding state-contingent debt securities might be costly for households and firms, as it may be hard for them to properly assess the risk of these instruments.

Ramon Marimon closed the conference.