
Transparency in investment decision making (Case study)

Reading time: 4 min

We recently completed a project in which we developed a data-based system for our Client that allows their experts to:

  • measure the return on their previous investment decisions automatically and according to a standard methodology, and
  • estimate the return potential of future investment plans with data-based models, in line with the definitions and methodology used during backtesting.

In this post, we discuss the business benefits and lessons learned from the project.

The development of this data-driven decision support system fulfilled a long-standing wish of our Client. Dozens of investment decisions are made each year, often in the order of millions of euros, yet these decisions are based primarily on expert estimates. Data is of course used in the process, but it was never feasible to properly prepare all available data to support a decision. An even bigger problem was that each data element was used according to a different methodology, so investments could not be measured and compared uniformly.

Thus, the primary goal of the new system was to create an objective, data-based alternative alongside the expert estimates, and equally high priority was given to backtesting based on a unified methodology. The system also serves reporting and monitoring purposes, as follows.

1 – Standardizes methodologies, facilitates interpretation and creates comparability

It brings backtesting and planning to a common denominator. While backtesting had been a typical controlling task at our Client, investment plans were prepared as calculations based on engineering-expert inputs, which were only eventually fed back into the controlling methodological framework. One of the main advantages of the system is that it reconciles these two approaches and measures performance in key indicators that all participants understand, such as payback time, net present value (NPV), and internal rate of return (IRR). Furthermore, as the system is used across countries, the results can be compared and good practices become transparently visible.

Thus, backtesting and forecasting results can be easily understood and compared using the common methodology (e.g., on dashboard maps and along a dozen other dimensions), the potentials ultimately used in investment planning can be checked, and the return-on-investment benchmarks can be refined. Overall, the process becomes more accurate, understandable, and consistent.
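As a minimal sketch of the three shared indicators (not the Client's actual model; the cash-flow figures are purely illustrative), payback time, NPV, and IRR can all be derived from one projected annual cash-flow series:

```python
# Hypothetical sketch: the three common indicators computed from one
# projected annual cash-flow series. Year 0 holds the (negative) initial
# investment; all names and numbers are illustrative.

def npv(rate, cash_flows):
    """Net present value of annual cash flows discounted at `rate`."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-7):
    """Internal rate of return: the rate where NPV = 0, found by bisection."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid  # NPV still positive -> true IRR is higher
        else:
            hi = mid
    return (lo + hi) / 2

def payback_years(cash_flows):
    """First year in which the cumulative (undiscounted) cash flow turns positive."""
    total = 0.0
    for t, cf in enumerate(cash_flows):
        total += cf
        if total >= 0:
            return t
    return None  # never pays back within the planning horizon

flows = [-1_000_000, 300_000, 350_000, 400_000, 450_000]
print(npv(0.08, flows))      # NPV at an 8% discount rate
print(irr(flows))            # internal rate of return
print(payback_years(flows))  # simple payback in years
```

Because backtesting and planning both feed series of this shape into the same three functions, past performance and future plans stay directly comparable.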

2 – Supports a complex planning process but does not replace decision making

The investment decision preparation process at our Client is as follows: first, the best ideas are selected from the longlist of investment opportunities based on expert judgment; then the experts give a bottom-up cost and revenue estimate for the best options on the shortlist, mainly using non-financial considerations. Finally, based on these quantitative inputs, a discounted cash flow model is built manually in an Excel template, which calculates the indicators most closely monitored when making the investment decision.

Our Client had identified several problems with this process: the methodology for shortlisting from the longlist may not be sound at all, and the Excel-template-based potential calculation is error-prone. The latter is especially important because the template in question is not a user-friendly spreadsheet but a hardcore financial model of which it can be said, with due self-criticism, that it is easier to fill in incorrectly than correctly.

The new system helps solve these problems while still not replacing decision-making: due to the unique nature of investments, there will always be aspects that cannot be measured from previous decisions. It does, however, facilitate prioritization, as an objective, data-based model that learns from the results of previous investments can be run automatically on every element of the longlist, so the experts also have an alternative shortlist. It likewise supports the verification of the experts' bottom-up estimates, as the models' potential estimates and the measured performance of previous investments continuously refine the benchmark.
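The alternative-shortlist idea can be sketched very simply: score every longlist item with a model and rank by the predicted return. The scoring function and item fields below are illustrative placeholders, not the Client's trained model:

```python
# Hypothetical sketch: produce an alternative, data-based shortlist by
# scoring each longlist item and ranking by predicted return.
# `predicted_npv` is a toy stand-in for a model trained on past outcomes.

def predicted_npv(item):
    """Toy estimate: total expected return over the horizon minus capex."""
    return item["expected_annual_return"] * item["years"] - item["capex"]

longlist = [
    {"name": "Site A", "capex": 900_000, "expected_annual_return": 260_000, "years": 5},
    {"name": "Site B", "capex": 400_000, "expected_annual_return": 95_000, "years": 5},
    {"name": "Site C", "capex": 1_200_000, "expected_annual_return": 210_000, "years": 5},
]

# Alternative shortlist: the top-N items by the model's estimate.
shortlist = sorted(longlist, key=predicted_npv, reverse=True)[:2]
print([item["name"] for item in shortlist])
```

The point is that the ranking runs automatically over the whole longlist, giving the experts a second opinion rather than a replacement for their judgment.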

3 – Transparent and validated, displays up-to-date information

Transparent: our Client has a data democracy policy, which means all relevant colleagues can see the published dashboards of the system. The dashboards detail the business explanation of each indicator and its data sources, and the underlying controlling/financial methodologies are gathered in separate e-learning material, so everyone can understand the content even on first viewing. If the factual data suggest an investment is unlikely to pay back within a reasonable time, experts can analyze the reasons on the more detailed dashboard pages, alongside data on top-performing investments.

Validated: the system is largely validated against both data warehouse data (e.g., transaction-level data aggregated from operational source systems) and manual inputs (e.g., business premises, previous investment proposals), so most data errors are filtered out. Should a data error still slip through, it shows up as an outstandingly good or bad payback result. (This validation alone would have been enough to recoup the project, as we found miscalculations in the preparation of previous investments in several cases.)
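The "outstandingly good or bad payback" signal can be sketched as a simple outlier check against the portfolio median; the tolerance band and figures below are illustrative assumptions, not the Client's actual rule:

```python
# Hypothetical sketch: flag investments whose payback deviates from the
# portfolio median by more than a relative tolerance band, so likely data
# errors surface for expert review.

from statistics import median

def flag_outliers(paybacks, band=0.5):
    """Return names whose payback differs from the median by more than `band` * median."""
    med = median(paybacks.values())
    return sorted(
        name for name, p in paybacks.items()
        if abs(p - med) > band * med
    )

paybacks = {"Inv 1": 4.2, "Inv 2": 3.8, "Inv 3": 0.3, "Inv 4": 4.5, "Inv 5": 19.0}
print(flag_outliers(paybacks))  # suspiciously fast or slow paybacks
```

A suspiciously short payback is just as useful a warning as a suspiciously long one: both usually point to a misplaced decimal somewhere upstream rather than a genuinely exceptional investment.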

Up-to-date: the data is loaded automatically, so as soon as a new period is available, the calculations run and the dashboards display the new values (including the values projected for the future based on the premises). An additional benefit of automatic loading is that experts no longer spend time converting fresh data into the correct format, which also eliminates the associated possibility of errors.

Lessons learned

During the project, both our Client and we learned a lot about systems that support expert decisions based on data. Our conclusions are summarized in the following five points:

  1. Greater emphasis on planning: finding the business methodology behind the potential and negotiating it among multiple participants took much more time than expected. That several key players changed positions or jobs during the project made matters worse, so one lesson is that knowledge transfer should receive explicit attention in similar projects.
  2. Identification and involvement of the “product owner”: decision-making accelerated once each function got its “owner”, i.e. its future main user. Without this, making decisions would have been very difficult.
  3. Test preparation: while the pre-made test cases for the backtesting functionality were very helpful and clear, it was not at all obvious how to test the predictions. In the end we chose a simulation approach: for selected investments, we assumed at a chosen point in time that they had not yet happened, ran the models on the plans, and finally compared the predictions with the actual data.
  4. Methodology vs. accuracy of data models: although the methodology mentioned several times was understandable, it also had its limits, as few data points were available to train models that fit the methodology. In our case this was the lesser evil, so the decision went in its favor.
  5. Data quality: almost every manual data source contained a typo (at minimum a misplaced decimal point) or an inconsistent formula, so these had to be corrected before loading into the system to prevent the “garbage in, garbage out” effect.
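The simulation approach from point 3 can be sketched roughly as follows; the toy prediction model and the error metric are illustrative assumptions, not the project's actual models:

```python
# Hypothetical sketch of the simulation-style test: pretend selected past
# investments had not yet happened, run the planning model with only the
# data available up to a cut-off date, then compare predictions with actuals.

def model_predict(investment, cutoff_year):
    """Toy stand-in for the planning model, trained only on pre-cutoff data."""
    history = [cf for year, cf in investment["cash_flows"] if year <= cutoff_year]
    return sum(history) / len(history) if history else 0.0  # naive: average past cash flow

def simulate_backtest(investments, cutoff_year):
    """Mean absolute error between predicted and actual post-cutoff cash flows."""
    errors = []
    for inv in investments:
        prediction = model_predict(inv, cutoff_year)
        actuals = [cf for year, cf in inv["cash_flows"] if year > cutoff_year]
        errors.extend(abs(prediction - cf) for cf in actuals)
    return sum(errors) / len(errors)

investments = [
    {"name": "Inv A", "cash_flows": [(2019, 100.0), (2020, 110.0), (2021, 130.0)]},
    {"name": "Inv B", "cash_flows": [(2019, 80.0), (2020, 90.0), (2021, 70.0)]},
]
print(simulate_backtest(investments, cutoff_year=2020))
```

The cut-off discipline is the essential part: the model must see nothing past the chosen date, so the comparison with the actuals is an honest measure of predictive quality.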


Author: Ákos Matzon – Advisory Team Leader
