EuSpRIG London conference: “Spreadsheet Governance – Policy and Practice”

To assist investors in evaluating new investment opportunities, Pristine has developed an open-source valuation model. The model can be used for pricing and financial statement analysis to identify strong investment opportunities, for example in IPO market value analysis.

Pristine wants to make this analysis tool public to help smaller companies make informed investment decisions. By changing the assumptions of the model users can analyse different financial scenarios and identify strong investment opportunities.

Pristine’s model uses LinkedIn IPO financial statement data to present an example.

Download the workbook here.

Absolute or Relative Valuation in Excel

Valuing a company’s stock by absolute or relative valuation is required in most fields of finance. In this article we list the steps needed to evaluate a company, using LinkedIn Corporation’s $175 million issue to illustrate them:

Issue Information:

  • Issue Size: $ 175 million
  • Stocks to be issued: N number of Class A common Stock of $ 0.0001 par value
  • Expected Market Capitalization: To be estimated

LinkedIn is one of the world’s largest professional networks on the Internet, with more than 90 million members in over 200 countries and territories. Through their proprietary platform, members are able to create, manage and share their professional identity online, build and engage with their professional network, access shared knowledge and insights, and find business opportunities, enabling them to be more productive and successful.

LinkedIn’s comprehensive platform provides members with solutions, including applications and tools, to search, connect and communicate with business contacts, learn about attractive career opportunities, join industry groups, research organizations and share information.

Defining Key Metrics of LinkedIn:

The key metrics can be extracted from the company’s S-1 filing with the SEC. The key metrics for LinkedIn are:

  • Number of Registered Members: Presently 90 million, but most of them don’t contribute to revenues
  • Unique Visitors: LinkedIn defines unique visitors as users who visited LinkedIn at least once during a month regardless of whether they are a member
  • Page Views: These are the number of pages on LinkedIn that users view during the period in which page views are measured.

Revenue Heads:

The most important task is getting information about the revenue sources, which can also be extracted from the S-1 filing or from the news. Major revenue heads of LinkedIn include:

  • Revenue from LinkedIn Corporate Solutions Customers: The number of LinkedIn Corporate Solutions customers is the number of enterprises and professional organizations that have active contracts for the product.
  • Revenues from LinkedIn Premium Subscribers

To get the revenue we need to identify the basic parameters that drive the revenue numbers. In LinkedIn’s case these were the number of users (Parameter A) and the charges users pay for the service (Parameter B). We take the historical values of those parameters and project them into the future, obtaining revenue as the number of users multiplied by the charge per user, i.e. A x B.
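The A x B projection described above can be sketched in a few lines. This is a minimal illustration only: the starting figures and growth rates below are assumptions for demonstration, not LinkedIn’s actual numbers.

```python
# Project revenue as (number of users) x (charge per user), i.e. A x B.
# All input figures below are illustrative assumptions.

def project_revenue(users, charge_per_user, user_growth, charge_growth, years):
    """Grow both parameters forward each year and multiply them."""
    projections = []
    for _ in range(years):
        users *= 1 + user_growth              # Parameter A grows
        charge_per_user *= 1 + charge_growth  # Parameter B grows
        projections.append(users * charge_per_user)
    return projections

# e.g. 1.0m paying users at $60/year; users grow 25% p.a., charges 5% p.a.
revenue = project_revenue(1_000_000, 60.0, 0.25, 0.05, years=5)
```

In a spreadsheet the same logic would sit in one row per parameter, with the revenue row multiplying the two.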

Major Cost Heads:

  • Cost of Revenue
  • Sales and Marketing
  • Product Development
  • General and Administrative
  • Depreciation and Amortization
  • Other Income (Expense), Net

The cost calculations work in the same way as the revenue calculations: we define a formula for each cost head (A x B) and obtain the final numbers by projecting the values of the parameters A and B.

Building the Asset Base:

The next important step is building the asset base using the proceeds of the issue. This is entirely management’s prerogative, so check the Management’s Discussion and Analysis for hints about the prospective use of the funds. If you don’t find anything concrete, you can project the assets by a generalized rule, such as growing assets at the growth rate of revenues.
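The fallback rule mentioned above (grow assets in line with revenues) can be sketched as follows; the opening balance and revenue series are illustrative assumptions.

```python
# Fallback rule: when the S-1 gives no concrete use of proceeds,
# grow the asset base at the growth rate of projected revenues.
# Opening balance and revenue series are illustrative ($m).

def project_assets(opening_assets, revenue_series):
    """Scale assets each year by the year-on-year revenue growth."""
    assets, projected = opening_assets, []
    for prev, curr in zip(revenue_series, revenue_series[1:]):
        assets *= curr / prev  # apply the revenue growth rate
        projected.append(assets)
    return projected

assets = project_assets(200.0, [100.0, 120.0, 150.0])
```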

Analyzing the statements:

The performance of the company can be measured using various ratios.

Return on Equity can also be broken down into its components using Du Pont Analysis.
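The three-factor Du Pont identity (ROE = net margin x asset turnover x equity multiplier) can be checked in a few lines; the input figures are illustrative, not LinkedIn’s.

```python
# Du Pont analysis:
#   ROE = (NI / Sales) x (Sales / Assets) x (Assets / Equity)
# which algebraically collapses back to NI / Equity.
# Input figures are illustrative ($m).

def dupont_roe(net_income, sales, assets, equity):
    net_margin = net_income / sales
    asset_turnover = sales / assets
    equity_multiplier = assets / equity
    return net_margin * asset_turnover * equity_multiplier

roe = dupont_roe(net_income=15.0, sales=240.0, assets=320.0, equity=160.0)
# equals net_income / equity = 15 / 160
```

The value of the decomposition is diagnostic: it shows whether ROE is driven by profitability, asset efficiency, or leverage.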

Valuation:

There are two types of valuation methods as discussed above.

  • Absolute Valuation: Once all the statements are in place, the free cash flows to the firm and to equity are calculated and discounted at the WACC and the cost of equity respectively, giving the firm value and the equity value of the company.
  • Relative Valuation: In relative valuation the ratios of comparable companies are applied to calculate the price. For example, if the average P/E of comparable companies is 5 and the EPS of LinkedIn is $10, then assuming LinkedIn will trade at the same P/E yields a price of $50.
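Both approaches above can be sketched minimally; the cash flows and discount rate are illustrative assumptions, while the P/E figures are those from the article’s own example.

```python
# Minimal sketches of both valuation approaches; inputs are illustrative.

def dcf_value(cash_flows, discount_rate):
    """Absolute valuation: discount a series of free cash flows.
    Use FCFF with the WACC for firm value, or FCFE with the cost
    of equity for equity value."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

def relative_value(peer_pe, eps):
    """Relative valuation: apply the comparables' average P/E to EPS."""
    return peer_pe * eps

firm_value = dcf_value([80.0, 90.0, 100.0], discount_rate=0.10)  # $m FCFF
price = relative_value(peer_pe=5, eps=10.0)  # the article's example: $50
```

A full model would also add a terminal value to the discounted cash flows; it is omitted here to keep the sketch short.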

Last Step:

The last step of the valuation process is summarizing the valuation on one sheet, which can be done using a football field chart: each ratio gives a different price range, and the price from absolute valuation will lie somewhere within, or outside, those ranges.

The content of this article is the opinion of the author.
Whether you are a consultant building a model for a client or an internal modeller, you or the person who has commissioned the model build will, understandably, want to know how long it will take. The answer is never straightforward: as with many other tasks, it really depends on how long you have got (and there’s never enough time!) and how much detail you need to go into. The more time you’ve got, the better the model will be! Some models could take months and months of dedicated work, or you could throw together a very high-level model in a day or two.
In a high-level model, the assumptions would probably be only estimates, as you won’t have had time to validate them with stakeholders, and the calculations will be fairly rough. You also might not have much in the way of fancy colours, formatting, drop-down boxes or tick boxes, but the numbers should still be reasonably accurate.

Building a Model Under Pressure

It is critical to remember that even under immense time pressure, the modeller should never compromise on good working practices. Even in a high-level model, best practice should still be followed, and correct labelling and documentation of assumptions should be maintained. See Best Practice in Financial Modelling for some guidelines on good practice. If these points have been adhered to, there should be surprisingly little difference in the base numerical outcome between a high-level model that takes a few days and a detailed model that could take months. If pressed for time, cosmetic features such as those listed below should be omitted.
Time permitting, the detailed model may show:

  1. Detailed assumptions documentation, validated by key project stakeholders
  2. Scenarios and sensitivity analysis, using drop-down boxes, tick boxes or data tables
  3. Table of contents or navigation tools
  4. Colours and formatting, conditional formatting, insertion of company logos
  5. Output summary and detailed analysis of output

Time should be spent on “quick wins” – use your judgement to spend your time on calculations that are material to the model.  Don’t waste time on validating minor assumptions which are not material to the outcome of the model.
The EuSpRIG (European Spreadsheet Risks Interest Group) conference is on this week in Greenwich, London. The EuSpRIG conference is a premier spreadsheet risk management conference and the place to be if you are serious about investigating, managing and controlling the myriad risks posed by corporate spreadsheets.

What: EuSpRIG conference “Spreadsheet Governance – Policy and Practice”

Where: Greenwich, London

When: 14-15 July 2011

Corality is joining forces with Morten Siersted from F1F9 for our contribution to this conference. The topic of our session is “Spreadsheets Grow Up – Update 2011”, and refers to the improvements in financial modelling standards generated by the SMART modelling methodology (Corality Financial Group) and FAST (F1F9).

The conference programme including all sessions and speakers is available for download here.

EuSpRIG’s 12th annual conference programme, “Spreadsheet Governance – Policy and Practice”, is packed with great speakers and authorities on spreadsheet risk. Here is an overview of the topics discussed, also accessible for download on the EuSpRIG website.

EuSpRIG 2011: Spreadsheet Governance – Policy and Practice

Spreadsheets on the Move: An Evaluation of Mobile Spreadsheets

Derek Flood, Rachel Harrison, Kevin Mc Daid

The power of mobile devices has increased dramatically in the last few years. These devices are becoming more sophisticated, allowing users to accomplish a wide variety of tasks while on the move. The increasingly mobile nature of business has meant that more users will need access to spreadsheets while away from their desktop and laptop computers. Existing mobile applications suffer from a number of usability issues that make using spreadsheets in this way more difficult. This work represents the first evaluation of mobile spreadsheet applications. Through a pilot survey, the needs and experiences of experienced spreadsheet users were examined. The range of spreadsheet apps available for the iOS platform was also evaluated in light of these users’ needs.

A Platform for Spreadsheet Composition

Pierpaolo Baglietto, Martino Fornasa, Simone Mangiante, Massimo Maresca, Andrea Parodi, Michele Stecca

The process of elaborating spreadsheet data is often performed in a distributed, collaborative way, which may lead to errors in copy-paste operations, loss of alignment and coherency due to multiple spreadsheet copies in circulation, as well as loss of data due to broken cross-spreadsheet links. In this paper we describe a methodology, based on a Spreadsheet Composition Platform, which greatly reduces these risks. The proposed platform seamlessly integrates the distributed spreadsheet elaboration, supports the commonly known spreadsheet tools for data processing and helps organisations to adopt a more controlled and secure environment for data fusion.

Controls over Spreadsheets for Financial Reporting in Practice

Nancy Coster, Linda Leon, Lawrence Kalbers, and Dolphy Abraham

This paper describes a survey involving 38 participants from the United States, representing companies that were working on compliance with the Sarbanes-Oxley Act of 2002 (SOX) as it relates to spreadsheets for financial reporting. The findings of this survey describe specific controls organisations have implemented to manage spreadsheets for financial reporting throughout the spreadsheet’s lifecycle. Our findings indicate that there are problems in all stages of a spreadsheet’s life cycle and suggest several important areas for future research.

In Search of a Taxonomy for Classifying Qualitative Spreadsheet Errors

Zbigniew Przasnyski, Linda Leon, and Kala Chand Seal

In this paper, we propose a taxonomy for categorising qualitative errors in spreadsheet models that offers a framework for evaluating the readiness of a spreadsheet model before it is released for use by others in the organisation. The classification was developed based on types of qualitative errors identified in the literature and errors committed by end-users in developing a spreadsheet model for Panko’s (1996) “Wall problem.” Closer inspection of the errors reveals four logical groupings of the errors creating four categories of qualitative errors. The usability and limitations of the proposed taxonomy and areas for future extension are discussed.

Breviz: Visualizing Spreadsheets using Dataflow Diagrams

Felienne Hermans, Martin Pinzger, Arie van Deursen

In previous work we have analysed the information needs of spreadsheet professionals and addressed their need for support with the transition of a spreadsheet to a colleague with the generation of data flow diagrams. In this paper we describe the application of these data flow diagrams for the purpose of understanding a spreadsheet with three example cases. We furthermore suggest an additional application of the data flow diagrams: the assessment of the quality of the spreadsheet’s design.

Requirements for Automated Assessment of Spreadsheet Maintainability

José Pedro Correia, Miguel A. Ferreira

In this position paper we argue for the need to create a model to estimate the maintainability of a spreadsheet based on (automated) measurement. We propose to do so by applying a structured methodology that has already shown its value in the estimation of maintainability of software products. We also argue for the creation of a curated, community-contributed repository of spreadsheets.

From Good Practices to Effective Policies for Preventing Errors in Spreadsheets

Daniel Kulesz

Good policies should specify rules which are based on “known-good“ practices. While there are many proposals for such practices in literature written by practitioners and researchers, they are often not consistent with each other. Therefore no general agreement has been reached yet and no science-based “golden rules” have been published. This paper proposes an expert-based, retrospective approach to the identification of good practices for spreadsheets. It is based on an evaluation loop that cross-validates the findings of human domain experts against rules implemented in a semi-automated spreadsheet workbench, taking into account the context in which the spreadsheets are used.

Effect of Range Naming Conventions on Reliability and Development Time for Simple Spreadsheet Formulas

Ruth McKeever, Kevin McDaid

This paper presents the results of two iterations of a new experiment, which measure the effect of range names on the correctness of, and the time it takes to develop, simple summation formulas. Our findings, supported by statistically significant results, show that formulas developed by non-experts using range names are more likely to contain errors and take longer to develop. This paper is important in that it finds that the choice of naming convention can have a significant impact on novice and intermediate users’ performance in formula development, with less structured naming conventions resulting in poorer performance by users.

An Empirical Study on End-users Productivity Using Model-based Spreadsheets

Laura Beckwith, Jácome Cunha, João Paulo Fernandes, João Saraiva

To improve end-users productivity, recent research proposes the use of a model-driven engineering approach to spreadsheets. In this paper we conduct the first systematic empirical study to assess the effectiveness and efficiency of this approach. A set of spreadsheet end-users worked with two different model-based spreadsheets, and we present and analyze here the results achieved.

Leveraging User Profile and Behavior to Design Practical Spreadsheet Controls for the Finance Function

Nancy Wu

Recognizing that the use of spreadsheets within finance will likely not subside in the near future, this paper discusses a major barrier that is preventing more organisations from adopting enterprise spreadsheet management programs. But even without a corporate mandated effort to improve spreadsheet controls, finance functions can still take simple yet effective steps to start managing the risk of errors in key spreadsheets by strategically selecting controls that complement existing user practice.

Spreadsheet on Cloud – Framework for Learning and Health Management System

K.S. Preeti, Vijit Singh, Sushant Bhatia, Ekansh Preet Singh, Manu Sheel Gupta

We have proposed a Spreadsheet on the cloud as the framework for building new web applications, which will be useful in various scenarios, specifically a School administration system and governance scenarios, such as Health and Administration. This paper is a manifestation of this work, and contains some use cases and architectures which can be used to realise these scenarios in the most efficient manner.

Towards Evaluating the Quality of a Spreadsheet: The Case of the Analytical Spreadsheet Model

Thomas A. Grossman, Vijay Mehrotra, Johncharles Sander

We consider the challenge of creating guidelines to evaluate the quality of a spreadsheet model. We suggest four principles. First, state the domain—the spreadsheets to which the guidelines apply. Second, distinguish the process by which a spreadsheet is constructed from the resulting spreadsheet artifact. Third, guidelines should be written in terms of the artifact, independent of the process. Fourth, the meaning of “quality” must be defined. We illustrate these principles with an example. We define the domain of “analytical spreadsheet models”, which are used in business, finance, engineering, and science. We propose for discussion a framework and terminology for evaluating the quality of analytical spreadsheet models. This framework categorises and generalises the findings of previous work on the narrower domain of financial spreadsheet models. We suggest that the ultimate goal is a set of guidelines for an evaluator, and a checklist for a developer.

Workbook Structure Analysis – “Coping with the Imperfect”

Bill Bekenn and Ray Hooper

This Paper summarises the operation of software developed for the analysis of workbook structure. This comprises: the identification of layout in terms of filled areas formed into “Stripes”, the identification of all the Formula Blocks/Cells and the identification of Data Blocks/Cells referenced by those formulas. This development forms part of our FormulaDataSleuth® toolset. It is essential for the initial “Watching” of an existing workbook and enables the workbook to be subsequently managed and protected from damage.

Spreadsheets in Financial Departments

Dr. Kevin McDaid, Dr Ronan MacRuairi, Mr. Neil Clynch, Mr. Kevin Logue, Mr. Cian Clancy, Mr. Shane Hayes

This paper represents an important attempt at profiling real and substantial spreadsheet repositories. Using the Luminous technology an analysis of 65,000 spreadsheets for the financial departments of both a government and a private commercial organisation was conducted. This provides an important insight into the nature and structure of these spreadsheets, the links between them, the existence and nature of macros and the level of repetitive processes performed through the spreadsheets. Furthermore it highlights the organisational dependence on spreadsheets and the range and number of spreadsheets dealt with by individuals on a daily basis. In so doing, this paper prompts important questions that can frame future research in the domain.

Beyond The Desktop Spreadsheet

Gordon Guthrie, Stephen McCrory

Hypernumbers is a new commercial web-based spreadsheet attempting to address spreadsheet risk in two radically new ways. The core approach is not to mitigate risk but to engineer out risky activities.

Hypernumbers enables barriers between data and programme instructions to be simply and reliably reimposed, thus draining off a whole category of run-time errors. This separation allows spreadsheet usage to be split into two distinct phases – development, where audit and testing can reduce errors, and deployment, where what would be risky practices in other spreadsheet paradigms are simply engineered out.

An Insight into Spreadsheet User Behaviour through an Analysis of EuSpRIG Website Statistics

Grenville J. Croll

The European Spreadsheet Risks Interest Group (EuSpRIG) has maintained a website almost since its inception in 2000. We present here longitudinal and cross-sectional statistics from the website log in order to shed some light upon end-user activity in the EuSpRIG domain.

About EuSpRIG (European Spreadsheet Risks Interest Group)

EuSpRIG offers Students, Professors, Directors, Managers and Professionals in all disciplines independent, authoritative and comprehensive web-based information on the current state of the art in Spreadsheet Risk Management. EuSpRIG is a voluntary and non-profit organisation.
