FAQ

This page aims to help ROMC participants overcome difficulties they may experience. Should the FAQs below not address your issue, please contact romc(dash)webadmin(at)jrc(dot)ec(dot)europa(dot)eu.

ROMC general:
  1. What is the ROMC and how is it linked to RAMI?
  2. I am a first-time user. How can I enter the ROMC?
  3. I have registered and entered the ROMC. What next?
  4. How can I test my model(s) using the ROMC?
  5. What is the privacy and data usage policy of ROMC?
  6. How can I contact the ROMC coordinators?
  7. Can I use my ROMC results graphics in a publication?
  8. What are the models that were used in the generation of the reference data?
ROMC navigation
  1. What is the difference between DEBUG and VALIDATE mode?
  2. I have clicked on a link in a pop-up window. How do I go back?
  3. Is there a way to add a note about my model's version when submitting results to the ROMC?
  4. What does the 'My Models > Model Comparison' link do?
  5. What does the 'My Models > Model Skill' link do?
  6. I registered but I do not get a new left-hand menu when logging in?
ROMC registration
  1. I registered but did not get a confirmation email?
  2. I am trying to register a new model but it does not work?
  3. Can I participate with multiple RT models?
  4. Can I provide additional information about my model (including pictures)?
  5. What if my model name has already been used?
ROMC Formatting
  1. Can I submit files in other formats than plain text?
  2. Do I need to adhere to the file naming convention?
  3. What format do my model simulation results files have to be in?
  4. How many of the prescribed test cases do I need to generate with my model?
  5. How many of the prescribed measurements do I need to simulate with my model?
ROMC Submission
  1. What if my model cannot exactly simulate the structure of a proposed test case?
  2. What if my model cannot generate all of the required measurements?
  3. How do I submit my model simulation results?
  4. I submitted my results in a single archive file that is being rejected. What did I do wrong?
  5. Can you explain the various options to speed up ROMC submissions?
ROMC Results
  1. Can you explain the various ROMC results graphs to me?
  2. I cannot see the graphs of my ROMC results?
  3. How is the Taylor diagram constructed?
  4. Can you explain the meaning of model SKILL?
  5. How is the χ² statistic on the results page computed?
  6. The title of the y-axis in my postscript files is missing or only partially visible?
  7. My ROMC results graphics appear incorrect. What can I do?
  8. Can I obtain my ROMC results in encapsulated postscript format?
  9. How do I reference ROMC graphs/results?
ROMC Miscellaneous
  1. Can I compare my different ROMC submissions against each other?
  2. Can I use the ROMC to generate graphs of the skill of my models?
  3. Can I generate 'All Experiment' graphs from a subset of one of my older ROMC submissions?
  4. Why do I need to pay attention to the reference plane when performing measurements?
  5. What if I discover an error in already submitted results?



General issues regarding the ROMC:


  1. What is the ROMC and how is it linked to RAMI?

    The RAMI Online Model Checker (ROMC) allows owners, users and developers of radiative transfer (RT) models to obtain an indication of the performance of their model. As its name suggests, the ROMC is closely linked to the RAdiative transfer Model Intercomparison (RAMI) exercise, and, as with RAMI, participation in the ROMC evaluation exercise is voluntary and free of charge, but subject to adherence to the ROMC privacy and data usage policy.

    To assess the performance of RT models in FORWARD mode the ROMC provides a series of test cases that can be subdivided into a structurally HOMOGENEOUS set, where the spatial distribution of scatterers is the same throughout the scene, and a structurally HETEROGENEOUS set, where the spatial distribution of the scatterers depends on the actual location within a scene. Regardless of the structure of the selected test cases, the ROMC allows RT models to be evaluated either in DEBUG or in VALIDATE mode. In DEBUG mode, users may themselves choose the number and types of both experiments and measurements; these are identical to those already featured during previous phases of RAMI, so the results are known. In VALIDATE mode, users are presented with test cases that differ slightly from those featured in previous phases of RAMI, so the results are not known a priori.

    Interested ROMC users are invited to register themselves (and their model) and to implement the test cases that are presented to them. Once they have performed the required model simulations they may submit their results on-line. These data will then be checked against the ROMC formatting and file-naming conventions (identical to those of the RAMI exercise; see below) and then compared against a reference set of BRF data. The reference set itself is generated from an ensemble of 3-D RT models that were identified as `most appropriate' during the 3rd RAMI phase.

    Whereas the RAMI exercise aims at the intercomparison of models across large sets of different structural, spectral and illumination conditions, the ROMC provides an indication of the performance of RT models using only a small ensemble of test cases. However, the ROMC is an on-line checking tool, thereby allowing modelers, developers and users of RT models to verify the performance of their models at any time, whereas new phases of RAMI are only conducted at intervals of 2-3 years.


  2. I am a first-time user. How can I enter the ROMC?

    To use the ROMC you have to register. If you are a first-time user, click on the 'New User Registration' link located at the bottom left of the ROMC front page. If you are a returning user, just type your user-name and password into the boxes provided and then click the submit button. New users will be shown a registration page where they have to provide a few items of information (mandatory items are labelled with a red asterisk: *), e.g., their user-name and password. Please pay particular attention to the correct spelling of your email address, since we will send a confirmation email to that address. When you have filled in the mandatory fields on the registration page, click on the submit button to launch the registration procedure. A window will inform you that we have sent an email to the address you indicated. At this stage you are a ROMC user in stand-by mode, that is, you will not be able to enter the ROMC with your user-name and password until you have opened the email we sent you and clicked on the 'I confirm this request' link provided therein. This action will activate your ROMC user account, and you will then be able to enter by keying in your user-name and password on the ROMC front page.


  3. I have registered and entered the ROMC. What next?

    Once you have entered the ROMC you may use the left-hand navigation menu to move within the ROMC. ROMC Info contains links to measurement definitions, results formatting and file naming conventions, as well as this FAQ page. My details allows you to change (some of) your registration details. My models allows you to register a new model (the first step toward checking your model's performance), view your models' performances, 'compare' different runs of your model, or view the entire list of models that have been evaluated using the ROMC. The Logout button will log you out of the ROMC.

    To evaluate your RT model, you should thus first register your model using the 'My Models > Register a new model' link. Once you have done this you will automatically be taken to the 'My Models > My model performances' page, where a four-column table with your model name as header line will be presented to you. Here, in the third column, you may indicate the "ROMC usage" type (DEBUG or VALIDATE) and the "canopy structure" type, which specify, respectively, the way in which the ROMC will evaluate your model and the dimensionality of the test cases used to do so (homogeneous versus heterogeneous vegetation canopies). The "ROMC usage" called DEBUG mode enables users to choose themselves the number and types of both experiments and measurements, which are identical to those featured in previous phases of RAMI: in other words, in DEBUG mode the results of the simulations are already publicly known/published. In VALIDATE mode, on the other hand, users are presented with a small set of test cases that differ slightly from those featured in previous phases of RAMI, so that the model simulation results are not known a priori. Note that the selection of test cases provided in VALIDATE mode cannot be changed. For more information see the next FAQ.


  4. How can I test my model(s) using the ROMC?

    The first step to check the performance of your RT model is to register it. This can be done by selecting the 'Register a new Model' link from the My Models menu in the left-hand navigation menu. To register a new model it needs to be given a model name and (optionally) a small description regarding its type and internal functioning, e.g., 'Analytic plane-parallel canopy reflectance model with hot spot formulation and Bunnik's leaf normal distribution function'. You may also provide a published reference to your model if it exists. Note that model names are case-sensitive alphanumeric strings of at most 10 characters that have to be unique within the context of the ROMC. Avoid using characters like \ / : ' * ? " < > | _ - or blanks in your model name. Because model names are case-sensitive, the model identifier included in the filenames of your simulation results must be exactly the same as the one defined when first registering that model in the ROMC. A complete list of currently registered models can be found by clicking on the 'model list' button in the My models menu. Note also that the number of models per user is limited to 3. When you have entered all information into the boxes of the model registration page click the 'submit' button to proceed. This will automatically forward you to the model evaluation page.
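    As a convenience, the naming rules above can be checked locally before registration. The following is a minimal sketch in Python, based solely on the constraints quoted in this FAQ (at most 10 case-sensitive alphanumeric characters, none of the listed forbidden characters); the authoritative check remains the one the ROMC performs at registration time:

      # Minimal local sanity check for a candidate ROMC model name,
      # based only on the rules stated in this FAQ.
      FORBIDDEN = set('\\/:\'*?"<>|_- ')

      def is_valid_romc_model_name(name: str) -> bool:
          """Return True if `name` looks acceptable under the stated rules."""
          if not 1 <= len(name) <= 10:
              return False                      # at most 10 characters
          if any(ch in FORBIDDEN for ch in name):
              return False                      # forbidden character present
          return name.isalnum()                 # alphanumeric strings only

      assert is_valid_romc_model_name("modtranJS")
      assert not is_valid_romc_model_name("my model")       # contains a blank
      assert not is_valid_romc_model_name("longmodelname")  # 13 characters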

    The model evaluation page can also be accessed via the 'My model performances' link on the My Models menu in the left-hand navigation menu. Assuming that you have just registered your first model, this page will show you a table with four columns and one header line. The link in the header line is your model name; clicking it will allow you to alter the description of your model. The left-hand (or first) column of the table gives the status of the test, i.e., NEW in the case where you have just registered this model. The second column indicates the date of starting this particular evaluation check of your model (since you have not started anything yet it will say < not activated yet >). The third column offers you several choices on how to run the ROMC and on what kinds of vegetation canopies.

    The first choice a user has to make is to select between two different types of "ROMC usage":

    • DEBUG mode is intended for ROMC participants who wish to debug their RT model by running it multiple times over a few specific experiments and/or measurements. To this effect a subset of the various homogeneous/heterogeneous, discrete/turbid medium test cases of previous phases of RAMI is made available here. The results of these test cases have already been published on-line (see the RAMI website). As such, DEBUG mode covers both homogeneous and heterogeneous (turbid and discrete) canopy representations. DEBUG mode allows you to repeatedly submit your simulations for identical experiments and measurements (which you can select yourself) until you are satisfied with the performance of your model. Note, however, that DEBUG mode does not really qualify as an evaluation test of your model, since all results are known (and can actually be downloaded in ASCII format). If you wish to obtain a qualified means of evaluating your model, use VALIDATE mode instead.
    • VALIDATE mode is intended for ROMC participants who wish to verify the performance of their model against a small set of test cases for which the results are not known a priori. The test cases here are restricted to homogeneous discrete scenarios (HOM), as well as heterogeneous (HET) discrete floating-spheres scenarios, with slightly different structural, spectral and/or illumination properties than those prescribed during previous phases of RAMI. If a user selects VALIDATE mode the ROMC will automatically (and randomly) assign a series of HOMogeneous and/or HETerogeneous test cases to the model. This selection of test cases cannot be changed until the user submits the corresponding results files, at which point a new set of test cases will be automatically (and randomly) selected. Obviously the reference dataset is not available for download in VALIDATE mode, but the ROMC results do qualify as a means of showing the performance of a user's model in publications.

    The next choice a user has to make is to select between different types of "canopy structure":

    • HOM refers to structurally homogeneous canopy scenarios. These test cases may come with finite-sized (discrete) or infinitesimally small (turbid) foliage representations.
    • HET refers to structurally heterogeneous canopy scenarios, like the floating spheres test cases; these may likewise come with finite-sized (discrete) or infinitesimally small (turbid) foliage representations.
    • HETHOM refers to test cases that may be either structurally homogeneous or heterogeneous.

    Once you have selected your model evaluation preferences, click on the View test cases button in the right-most column of the table and you will be presented with a two-column table showing the proposed/assigned experiments (left column) and measurements (right column). Each of the measurement and experiment identifiers in this table can be clicked to open a pop-up window with a detailed description of the structural, spectral and illumination setup of the experiment, or of the exact measurement conditions as well as the formatting requirements for the results.

    • In DEBUG mode you can select any combination of experiments and measurements you wish (by ticking their boxes, or, if you wish to include all available experiments and measurements, by ticking the appropriate 'select all' box at the top of the respective table column). Then click on the Confirm selected test cases button at the bottom of the table to continue evaluating your model against these selected test cases, or, alternatively, click on the left-ward pointing triangle at the bottom of the table to return to the "My model > My model performance" page. Note that in DEBUG mode you do not need to submit all selected measurements for all selected experiments and vice versa.
    • In VALIDATE mode you will be presented with a small set of experiments that were randomly selected from a larger list. You will also be presented with a few measurement types, typically brf1, brfpp, brfop and fabs. Although you may decide not to perform all measurements, whatever measurement you decide to submit has to be submitted for all experiments in the left-hand column of the table with the proposed test cases. Click on the Confirm button at the bottom of the table if you wish to continue evaluating your model against these test cases, or, alternatively, click on the left-ward pointing triangle at the bottom of the table to return to the "My model > My model performance" page. Note, however, that you will always be presented with exactly the same test cases for a given "canopy structure" type in VALIDATE mode until you have submitted results for them.

    Whether you chose DEBUG or VALIDATE mode, homogeneous or heterogeneous canopy structures (or both): once you have accepted your selected/assigned test scenes, the ROMC returns you to the "My model > My model performance" page, which has changed to indicate that:

    • the first column of the model table now features a light-yellow background and reads 'Current test: ACTIVE' instead of NEW. A small add note icon will be visible in its right-hand corner. Clicking on this icon allows you to insert a small note (up to 80 characters) or comment to clarify the specificities of this submission with respect to previous or subsequent ones. This could be a model version number, a description of the selected experiments/measurements, or some recent change that you implemented in your model.
    • the second column now features the date of accepting your test cases.
    • the third column of the model table in question now features information on the "ROMC usage" (DEBUG or VALIDATE) and the "canopy structure" (HOM, HET, HETHOM), and this cannot be altered anymore.
    • the rightmost column now features three options: VIEW Test Scenes to view the detailed descriptions of the measurements and experiments in this assignment, Check Format to verify the formatting of your model simulations (after having run your model on the test cases but) prior to their submission to the ROMC, and SUBMIT Results to submit your (format checked) model simulations to the ROMC.

    To have access to another set of test cases for this model you will first have to submit the results of your simulations using the SUBMIT Results link (located in the rightmost column).

    Assuming now that you have implemented all selected/assigned test scenes as required (you can always go to their descriptions using the VIEW Test Scenes link in the right-hand column of the table) and that you have run your model in accordance with the various ROMC simulation definitions to yield results files that are in line with the ROMC file-naming and formatting rules (you can always check whether your output files are ROMC-compliant by using the Check Format link in the rightmost column of the table), you can proceed to submit these results by clicking on the SUBMIT Results link located in the rightmost column of the table on the model performance page. To do so you may either submit one single archive file (accepted formats are: .tar, .zip, .jar with the compressions .gz, .bz2, .Z, .tgz), or else, by clicking on the 'Multiple ASCII Files' box, submit individual (uncompressed) results files one by one (this option is only visible if you upload no more than 44 files). In both cases you may use the 'browse' button to locate the files that you wish to transfer. When you press the 'send' button we will collect the files and perform a variety of security and formatting tests. If you did not implement the correct file-naming scheme, or did not submit all required measurement files, a 'SUBMISSION ERROR' page will appear. You will have to repeat the submission process, selecting at least one measurement (the same one) per experiment for the submission to work. If the right number of results files with correct file-naming schemes has been submitted, the formatting of the content of these files will also be analyzed. If there are any deviations from the ROMC formatting convention (in particular the view zenith and azimuth nomenclature) this will also give rise to a 'SUBMISSION ERROR' page where the error is explained so that you may fix it.
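    For those preparing a single archive file, the following Python sketch shows one way to bundle a directory of plain-ASCII results files into a gzip-compressed tar archive (one of the accepted formats listed above). The directory and archive names are placeholders, not ROMC requirements:

      import tarfile
      from pathlib import Path

      results_dir = Path("romc_results")        # hypothetical local directory
      archive = "romc_submission.tar.gz"        # hypothetical archive name

      # Bundle every .mes results file into one gzip-compressed tar archive.
      with tarfile.open(archive, "w:gz") as tar:
          for mes_file in sorted(results_dir.glob("*.mes")):
              # Add files at the archive root so directory prefixes cannot
              # interfere with the ROMC file-naming checks.
              tar.add(mes_file, arcname=mes_file.name)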

    If your submission is successful, i.e., all filenames and content formatting are correct and the right number of files was transferred, you will be informed via the temporary 'SUBMISSION SUCCESSFUL' page, which will automatically forward you to the initial 'My models > My model performance' page again. Here the SUBMIT Results link in the rightmost column of the model table has been replaced by a VIEW Results link, and the submission time has been added to the second column of the table. The first column now reads 'Current test (completed)' and a new row (with the NEW label in the first column) has been inserted in the model table, where you may select the next test conditions for evaluating the performance of your model if you wish. Should you decide to click on the VIEW Results link, you will be presented with the Results page containing a table whose blue and orange coloured fields provide links to measurement, experiment and statistical description pages. The various links in the white table fields provide access to various graphical presentations of the agreement between your uploaded results and the reference dataset.

    You may either save these graphical results files (which feature the ROMC watermark and a reference number) as jpeg files directly from your browser, or else choose to receive them via email as black-and-white or colour encapsulated postscript files (.eps) by selecting one or more measurements and statistics (individually, or via the rows or ALL boxes). Note, however, that a maximum of 5 emails of about 1.5 Mbytes each will be sent at any one time (so make use of the size information provided in the rightmost column and bottom row when selecting your results). In DEBUG mode you may also receive the reference data in text form (ASCII). Once you have made your choice click on the 'receive' button to obtain (one or more) emails (sorted per type, i.e., eps, ASCII). Alternatively, use the 'Back' button to return to the 'My Models > My Model Performance' page. You are free to use these results as long as you comply with the ROMC data usage policy.

  5. What is the privacy and data usage policy of ROMC?

    The authors/developers of the participating RT models maintain full rights on their models. In fact, participation in the ROMC activity does not assume, imply or require any transfer of ownership, intellectual property, commercial rights or software codes between the participants and the ROMC coordinators.

    Registered participants do deliver to the ROMC the results of computations (typically, suitably formatted tables of numbers representing reflectances and other properties of the simulated radiation field), obtained with their own models, for the explicit purpose of comparing them with similarly derived results from a set of 3-D RT models that were identified during previous phases of the RAdiative transfer Model Intercomparison (RAMI) initiative. These results become the property of the ROMC coordination group, but the latter, in turn, offers the qualified participant the right to use, distribute and publish the ROMC results pertaining to his/her model (GIF and EPS formats), provided that they are not changed or modified in any way. The ROMC coordinators encourage the publication of VALIDATE mode results and retain the right to intervene if ROMC users try to accredit their models by changing or modifying ROMC results.

    Furthermore, since the ROMC's DEBUG mode features test cases identical to those that were made publicly available at the end of the third phase of RAMI, it should be apparent that DEBUG mode results do not qualify as model validation! Users of the ROMC are therefore advised not to use DEBUG mode results to make public claims regarding the performance of a model. Results obtained in VALIDATE mode, by contrast, do qualify as a means of documenting the performance of a given RT model and consequently may be used in publications for that purpose. In fact, one of the goals of the ROMC is precisely to establish a mechanism allowing users, developers and owners of RT models to make the quality of their RT models known to the larger scientific community. By using results obtained from a series of 3-D Monte Carlo models, identified during the RAMI community exercise, to generate a surrogate truth, the ROMC provides an independent and speedy mechanism to evaluate the performance of RT models (in the absence of other absolute reference standards). The ROMC thus encourages the usage of the provided VALIDATE mode results graphs in publications.

    Please note that it is not permissible to modify, change or edit the results provided by the ROMC. This applies to ROMC graphs, statistics, and reference data, which must all be used 'as is'. If you choose to include ROMC results in a publication or presentation, their source should be acknowledged as follows:

    These results were obtained with the RAMI On-line Model Checker (ROMC) available at http://romc.jrc.ec.europa.eu/ (Widlowski et al., 2007).

    where (Widlowski et al., 2007) - click here for a LaTeX formatted bibliographic reference - refers to:

    Widlowski, J.-L., M. Robustelli, M. Disney, J.-P. Gastellu-Etchegorry, T. Lavergne, P. Lewis, P. J. R. North, B. Pinty, R. Thompson, and M. M. Verstraete (2007), 'The RAMI On-line Model Checker (ROMC): A web-based benchmarking facility for canopy reflectance models', Remote Sensing of Environment, 112(3), 1144-1150.
    DOI: 10.1016/j.rse.2007.07.016
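    For users who maintain their bibliography with BibTeX, the reference above could be transcribed roughly as follows; this is a reconstruction from the citation text printed here, and the LaTeX-formatted reference behind the 'click here' link above remains the authoritative version:

      @article{Widlowski2007ROMC,
        author  = {Widlowski, J.-L. and Robustelli, M. and Disney, M. and
                   Gastellu-Etchegorry, J.-P. and Lavergne, T. and Lewis, P. and
                   North, P. J. R. and Pinty, B. and Thompson, R. and
                   Verstraete, M. M.},
        title   = {The {RAMI} On-line Model Checker ({ROMC}): A web-based
                   benchmarking facility for canopy reflectance models},
        journal = {Remote Sensing of Environment},
        volume  = {112},
        number  = {3},
        pages   = {1144--1150},
        year    = {2007},
        doi     = {10.1016/j.rse.2007.07.016}
      }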

    Prior to publication the registered ROMC user is encouraged to contact the ROMC coordinators (using romc(dash)webadmin(at)jrc(dot)ec(dot)europa(dot)eu) to ensure the correctness of the received ROMC results plots. Although the ROMC procedure aims at eliminating errors and inconsistencies, no guarantees as to the correctness of the automatically displayed results can be given (in particular if models are tested with functionalities that lie outside the scope of RAMI, e.g., they generate specular peaks, etc.). The responsibility for verifying ROMC results prior to any publication thus lies with the registered ROMC user, and neither the ROMC coordinators nor their institution accept any responsibility for consequences arising from the publication of unverified ROMC results.

    Submitted results are not returned to their originators; they are kept for verification purposes or in case of conflicts arising from unjustified model performance claims. By submitting simulation results to the ROMC coordinators, registered ROMC users authorize the ROMC to analyze these results and compare them to a set of reference data (derived from 3-D Monte Carlo models participating in previous phases of RAMI). The ROMC coordinators will not distribute the outcome of such analysis without the explicit consent of the registered ROMC users in question. Furthermore, the ROMC coordinators will never disclose to anybody (including other registered ROMC users), for any reason, either the data or the ROMC results obtained by a registered ROMC user. The sole exception to this is if there are substantiated doubts about the way in which a registered ROMC user has produced claims regarding his/her model performance, in the scientific literature or in public, on the basis of results apparently originating from the ROMC.

    For practical reasons, the copyright on the ROMC web site, including the text, figures, tables and other graphical, textual or programming elements, remains with the ROMC coordinators. Within the legal limits allowed by copyright law, figures, tables, statistics and other materials published on the ROMC web site can be downloaded and used in other works, provided full and explicit reference to the source materials is duly given (see above).

    The ROMC coordinators do not take any responsibility for the scientific value or suitability for any particular purpose of the model results submitted in this context. They do, however, make every effort to ensure the appropriateness, accuracy and fairness of these benchmarks, and provide advice or support to registered ROMC users as and when needed. Should you find obvious errors and shortcomings, please do not hesitate to communicate these to us so that we may improve this service for other users. Just send an email.

  6. How can I contact the ROMC coordinators?

    The ROMC coordinator can be contacted via the following email address: romc(dash)webadmin(at)jrc(dot)ec(dot)europa(dot)eu.

  7. Can I use my ROMC results graphics in a publication?

    According to the ROMC data usage policy, all results submitted to the ROMC become the property of the ROMC coordination group, but the latter, in turn, offers the registered user the right to use, distribute and (preferably in VALIDATE mode only) publish the ROMC results pertaining to his/her model (GIF and EPS formats), provided that they are not changed or modified in any way. The ROMC coordinators retain the right to intervene if ROMC users try to accredit their models by changing or modifying ROMC results.

    Furthermore, since the ROMC's DEBUG mode features test cases identical to those that were made publicly available at the end of the third phase of RAMI, it should be apparent that DEBUG mode results do not qualify as model validation! Users of the ROMC are therefore advised not to use DEBUG mode results to make public claims regarding the performance of a model. Results obtained in VALIDATE mode, by contrast, do qualify as a means of documenting the performance of a given RT model and consequently may be used in publications for that purpose. In fact, one of the goals of the ROMC is precisely to establish a mechanism allowing users, developers and owners of RT models to make the quality of their RT models known to the larger scientific community. By using results obtained from a series of 3-D Monte Carlo models, identified during the RAMI community exercise, to generate a surrogate truth, the ROMC provides an independent and speedy mechanism to evaluate the performance of RT models (in the absence of other absolute reference standards). The ROMC thus encourages the usage of the provided VALIDATE mode results graphs in publications.

    Please note that it is not permissible to modify, change or edit the results provided by the ROMC. This applies to ROMC graphs, statistics, and reference data, which must all be used 'as is'. If you choose to include ROMC results in a publication or presentation, their source should be acknowledged as follows:

    These results were obtained with the RAMI On-line Model Checker (ROMC) available at http://romc.jrc.ec.europa.eu/ (Widlowski et al., 2007).

    where (Widlowski et al., 2007) - click here for a LaTeX formatted bibliographic reference - refers to:

    Widlowski, J.-L., M. Robustelli, M. Disney, J.-P. Gastellu-Etchegorry, T. Lavergne, P. Lewis, P. J. R. North, B. Pinty, R. Thompson, and M. M. Verstraete (2007), 'The RAMI On-line Model Checker (ROMC): A web-based benchmarking facility for canopy reflectance models', Remote Sensing of Environment, 112(3), 1144-1150.
    DOI: 10.1016/j.rse.2007.07.016

    Prior to publication the registered ROMC user is encouraged to contact the ROMC coordinators (using romc(dash)webadmin(at)jrc(dot)ec(dot)europa(dot)eu) to ensure the correctness of the received ROMC results plots. Although the ROMC procedure aims at eliminating errors and inconsistencies, no guarantees as to the correctness of the automatically displayed results can be given (in particular if models are tested with functionalities that lie outside the scope of RAMI, e.g., they generate specular peaks, etc.). The responsibility for verifying ROMC results prior to any publication thus lies with the registered ROMC user, and neither the ROMC coordinators nor their institution accept any responsibility for consequences arising from the publication of unverified ROMC results.

  8. What are the models that were used in the generation of the reference data?

    One of the drawbacks of RT model evaluations is the absence of an absolute reference standard, or truth. In order to compare model output against something, a surrogate truth has to be identified. Within the ROMC this surrogate truth is generated as the average of a series of 3-D Monte Carlo RT models that were identified during previous phases of the RAdiative transfer Model Intercomparison (RAMI) exercise. The number and names of these models may vary from experiment to experiment and from measurement to measurement. Altogether the following models participated in the generation of the reference data within the ROMC:

    1. DART:
      Gastellu-Etchegorry, J.-P., V. Demarez, V. Pinel, and F. Zagolski (1996) 'Modeling Radiative Transfer in Heterogeneous 3-D Vegetation Canopies', Remote Sensing of Environment, 58, 131-156.
    2. Drat:
      Lewis, P. (1999), 'Three-dimensional plant modelling for remote sensing simulation studies using the Botanical Plant Modelling System', Agronomie: Agriculture and Environment, 19, 185-210.
    3. FLIGHT:
      North, Peter R. J. (1996) 'Three-Dimensional Forest Light Interaction Model Using a Monte Carlo Method', IEEE Transactions on Geoscience and Remote Sensing, 34, 946-956.
    4. Rayspread:
      Widlowski, J.-L., T. Lavergne, B. Pinty, M. M. Verstraete and N. Gobron (2006) 'Rayspread: A virtual laboratory for rapid BRF simulations over 3-D plant canopies', in Computational Methods in Transport, Lecture Notes in Computational Science and Engineering Series, 48, Springer Verlag, Berlin, 211-231.
    5. Raytran:
      Govaerts, Yves and Michel M. Verstraete (1998) 'Raytran: A Monte Carlo Ray Tracing Model to Compute Light Scattering in Three-Dimensional Heterogeneous Media', IEEE Transactions on Geoscience and Remote Sensing, 36, 493-505.
    6. Sprint:
      Thompson, R. L. and Narendra S. Goel (1998) 'Two Models for Rapidly Calculating Bidirectional Reflectance: Photon Spread (PS) Model and Statistical Photon Spread (SPS) Model', Remote Sensing Reviews, 16, 157-207.

    Here you will find a table showing for every experiment (row) and measurement (column) combination which one(s) of the above reference model candidates have been used to generate the ROMC surrogate truth in DEBUG mode.
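    Purely as a schematic illustration of the averaging idea described above (not the actual ROMC computation, which combines experiment- and measurement-specific subsets of the models listed in that table), a surrogate truth could be assembled like this:

      import numpy as np

      # Illustrative BRF values only; rows are view angles, columns are
      # candidate reference models (e.g. DART, FLIGHT, Raytran).
      brf_per_model = np.array([
          [0.0412, 0.0405, 0.0419],
          [0.0365, 0.0360, 0.0371],
          [0.0498, 0.0491, 0.0502],
      ])

      # The surrogate truth is the per-angle average over the ensemble.
      surrogate_truth = brf_per_model.mean(axis=1)
      print(surrogate_truth)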



Navigation issues with the ROMC:


  1. What is the difference between DEBUG and VALIDATE mode?

    In the third column of the model table(s) on the 'My Models > My models performances' page you can choose between two different ROMC usage types, DEBUG mode or VALIDATE mode:

    • DEBUG mode is intended for ROMC participants who wish to debug their RT model by running it multiple times over a few specific experiments and/or measurements. To this effect a subset of the various homogeneous/heterogeneous, discrete/turbid medium test cases of previous phases of RAMI is made available here. The results of these test cases have already been published on-line (see the RAMI website). As such, DEBUG mode covers both homogeneous and heterogeneous (turbid and discrete) canopy representations. DEBUG mode allows you to repeatedly submit your simulations for identical experiments and measurements (which you can select yourself) until you are satisfied with the performance of your model. Note, however, that DEBUG mode does not really qualify as an evaluation test of your model, since all results are known (and can actually be downloaded in ASCII format). If you wish to obtain a qualified means of evaluating your model, use VALIDATE mode instead.
    • VALIDATE mode is intended for ROMC participants who wish to verify the performance of their model against a small set of test cases for which the results are not known a priori. The test cases here are restricted to homogeneous discrete scenarios (HOM), as well as heterogeneous (HET) discrete floating-spheres scenarios, with slightly different structural, spectral and/or illumination properties than those prescribed during previous phases of RAMI. If a user selects VALIDATE mode the ROMC will automatically (and randomly) assign a series of HOMogeneous and/or HETerogeneous test cases to the model. This selection of test cases cannot be changed until the user submits the corresponding results files, at which point a new set of test cases will be automatically (and randomly) selected. Obviously the reference dataset is not available for download in VALIDATE mode, but the ROMC results do qualify as a means of showing the performance of a user's model in publications.

    For each of the ROMC usage types you will have access to different types of "canopy structure":

    • HOM refers to structurally homogeneous canopy scenarios. These test cases may come with finite-sized (discrete) or infinitesimally small (turbid) foliage representations.
    • HET refers to structurally heterogeneous canopy scenarios, like the floating spheres test cases; these may likewise come with finite-sized (discrete) or infinitesimally small (turbid) foliage representations.
    • HETHOM refers to test cases that may be either structurally homogeneous or heterogeneous.

    Once you have selected your model evaluation preferences, DEBUG and VALIDATE mode will give you different options when you click on the View test cases button in the right-most column of the table. This button will present you with a two-column table showing the proposed/assigned experiments (left column) and measurements (right column). Each of the measurement and experiment identifiers in this table can be clicked to open a pop-up window with a detailed description of the structural, spectral and illumination setup of the experiment, or of the exact measurement conditions as well as the formatting requirements for the results.

    • In DEBUG mode you can select any combination of experiments and measurements you wish (by ticking their boxes, or, if you wish to include all available experiments and measurements, by ticking the appropriate 'select all' box at the top of the respective table column). Then click on the Confirm selected test cases button at the bottom of the table to continue evaluating your model against these selected test cases, or, alternatively, click on the left-ward pointing triangle at the bottom of the table to return to the "My model > My model performance" page. Note that in DEBUG mode you do not need to submit all selected measurements for all selected experiments and vice versa.
    • In VALIDATE mode you will be presented with a small set of experiments that were randomly selected from a larger list. You will also be presented with a few measurement types, typically brf1, brfpp, brfop and fabs. Although you may decide not to perform all measurements, whatever measurement you decide to submit has to be submitted for all experiments in the left-hand column of the table with the proposed test cases. Click on the Confirm button at the bottom of the table if you wish to continue evaluating your model against these test cases, or, alternatively, click on the left-ward pointing triangle at the bottom of the table to return to the "My model > My model performance" page. Note, however, that you will always be presented with exactly the same test cases for a given "canopy structure" type in VALIDATE mode until you have submitted results for them.

  2. I have clicked on a link in a pop-up window. How do I go back?

    If a pop-up window itself contains a link and you followed that link, you can go back to the original pop-up window content using the right mouse button: clicking it will open a menu from which you should choose 'Back'. Alternatively, in Internet Explorer, the 'Backspace' key on your keyboard will do the same.

  3. Is there a way to add a note about my model's version when submitting results to the ROMC?

    Yes, this can be done by clicking on the add note image in the rightmost table cell. A pop-up window will allow you to write a small text note (up to 80 characters) to clarify the specificities of this submission with respect to previous or subsequent ones (rather than having to look at the results graphs themselves). This comment or note could, for example, be a version number of the model, a different set of test cases, or an updated multiple scattering formulation. When you have finished typing, click on the Insert text button, and then on 'close' to close the pop-up window. The text will be added in a small font at the lower right-hand side of the rightmost cell of the model table of origin on the 'My Model Performance' page. You can change the text by clicking on it; the same pop-up window will then re-appear.

  4. What does the 'My Models > Model Comparison' link do?

    The 'My models > Model Comparison' link in the left-hand navigation panel allows you to compare the results of several of your ROMC submissions against each other. This link will bring you to the first of three pages required before the ROMC will generate results graphs of the ROMC submissions that you wish to plot against each other:

    • STEP 1: select the 'ROMC usage' type first (DEBUG or VALIDATE), and then the 'canopy structure' type (HOM, HET or HETHOM). Clicking on the 'Show available experiments/measurements' button will present you with a page that features all the ROMC experiments and measurements belonging to the selected 'ROMC usage' and 'canopy structure' types.
    • STEP 2: select the experiments (left) and measurements (right) that you are interested in, using the 'shift' key and the mouse button to select adjacent entries, and the 'Ctrl' key and the mouse button to select separate entries in these lists. When you click on the 'Find my ROMC submissions that include ALL these experiments/measurements' button, the ROMC will check which of your ROMC submissions included all of the measurements and experiments that you selected. If none of your ROMC submissions contains all selected measurements and experiments, the ROMC will tell you so and you will have to change your selection.
    • STEP 3: select which of your models and ROMC submissions you wish to include in the subsequent ROMC results graphics (to do so, tick the corresponding table column entitled 'one or more ROMC datasets'). By default your ROMC submissions will be compared against the ROMC reference dataset (last column in the table). You can change this by ticking one entry in the second table column, entitled 'one reference ROMC dataset'. Note, however, that a given ROMC submission cannot be reference and model at the same time. Finally, click the 'Generate Intercomparison results' button.

    A results page with various results graphics will be generated that now features all the ROMC submissions you selected (an example is provided below). If you chose different ROMC submissions of the same model, a pop-up legend will appear that relates the model name colour to the actual time at which each ROMC submission was performed. All graphs can be received in postscript form by selecting the graphs of interest and clicking on the 'send' button at the bottom of the page.

  5. What does the 'My Models > Model Skill' link do?

    The 'My models > Model Skill' link in the left-hand navigation panel allows you to generate a graph with the skill values of one or more of your ROMC submissions. This link will bring you to the first of two pages required before the ROMC will generate the skill graphs you want:

    • STEP 1: select the 'ROMC usage' type first (DEBUG or VALIDATE), and then the 'canopy structure' type (HOM, HET or HETHOM). Clicking on the 'Show available experiments/measurements' button will present you with a page that features all your ROMC experiments and measurements for which skill values are available and that belong to the selected 'ROMC usage' and 'canopy structure' types.
    • STEP 2: click on the measurements (top row) that you wish to include in the final skill graph. Next, select the models and/or model submissions you wish to include in the resulting skill graph. Note that if a +/- appears in a table cell below one of your model names, more than one ROMC submission of this model was performed. Clicking on the +/- will expand the table to show all ROMC submissions of this model. Once you have selected all the models and/or ROMC submissions and measurements you wish to include in the skill graphs, click on 'Create graph of selected model skill values'.

    The result is a graph showing model skill (on a logarithmic y-axis), where skill=100 is a perfect match and skill=0 is the worst case, for each of the measurements you selected. If you chose different ROMC submissions from the same model, a legend graph will be shown that relates the model colour to the actual time at which each ROMC submission was performed. The size of the plotting symbols is related to the number of experiments performed for any given ROMC submission. All graphs can be received in postscript form by selecting the graphs of interest and clicking on the 'send' button at the bottom of the page.

  6. I registered but I do not get a new left-hand menu when logging in?

    If this happens then it may well be that you have not enabled cookies in your browser. To enable cookies in Mozilla Firefox, for example, go to 'edit > preferences > cookies > allow sites to set cookies' and click 'OK'. Different browsers may have different paths to follow to allow cookies; in case of difficulties ask your IT support. If enabling cookies does not solve the issue, then please send an email to the ROMC coordinators at romc(dash)webadmin(at)jrc(dot)ec(dot)europa(dot)eu.




Registration issues with the ROMC:


  1. I registered but did not get a confirmation email?

    If this happens then it may well be that you mis-spelled your email address, or that our email to you was blocked by spam-blocking software at your end. In either case please send an email to the ROMC coordinators at romc(dash)webadmin(at)jrc(dot)ec(dot)europa(dot)eu.

  2. I am trying to register a new model but it does not work?

    If this happens then it may well be that you have already registered three models. Currently the maximum number of models per user is limited to 3. If you wish to register additional models please send an email to the ROMC administrator at romc(dash)webadmin(at)jrc(dot)ec(dot)europa(dot)eu.

    Alternatively, if you receive the message 'Model name exists already!' when trying to register a new model, someone else has already registered that case-sensitive alphanumeric string within the ROMC. Your only option at that stage is to find another model name, for example by capitalizing some letters, or by adding your initials (a year, or a version number) at the end of the model name: if 'Modtran' exists already, try 'MODTRAN', 'modtranJS' or 'modtran2'. To facilitate the selection of model names, note that model names are case-sensitive alphanumeric strings of at most 10 characters that have to be unique within the context of the ROMC. Avoid using characters like \ / : ' * ? " < > | _ - or blanks in your model name. Because model names are case-sensitive, the model identifier included in the filenames of your simulation results must be exactly the same as the one defined when first registering that model in the ROMC. A complete list of currently registered models can be found by clicking on the 'model list' button in the My models menu.

  3. Can I participate with multiple RT models?

    Participation with multiple models is permissible, up to a maximum of three. If you wish to register more than 3 models you must contact the ROMC coordinators at the following email address: romc(dash)webadmin(at)jrc(dot)ec(dot)europa(dot)eu. Note that model names are case-sensitive alphanumeric strings of at most 10 characters that have to be unique within the context of the ROMC. Avoid using characters like \ / : ' * ? " < > | _ - or blanks in your model name. Because model names are case-sensitive, the model identifier included in the filenames of your simulation results must be exactly the same as the one defined when first registering that model in the ROMC.

  4. Can I provide additional information about my model?

    Yes. During the registration process you may be rather descriptive in outlining the functioning as well as selected applications of your model. You may provide references and a contact address. Even after registering a new model it is always possible to click on a model's name (on the 'My models > My models performance' page, accessible from the left-hand navigation menu) to edit the current model description.

  5. What if my model name has already been used?

    If you receive the message 'Model name exists already!' when trying to register a new model, someone else has already registered that case-sensitive alphanumeric string within the ROMC. Your only option at that stage is to find another model name, for example by capitalizing some letters, or by adding your initials (a year, or a version number) at the end of the model name: if 'Modtran' exists already, try 'MODTRAN', 'modtranJS' or 'modtran2'. To facilitate the selection of model names, note that model names are case-sensitive alphanumeric strings of at most 10 characters that have to be unique within the context of the ROMC. Avoid using characters like \ / : ' * ? " < > | _ - or blanks in your model name. Because model names are case-sensitive, the model identifier included in the filenames of your simulation results must be exactly the same as the one defined when first registering that model in the ROMC. A complete list of currently registered models can be found by clicking on the 'model list' button in the My models menu. Note also that the number of models per user is limited to 3.



Formatting issues with the ROMC:


  1. Can I submit measurement files in other formats than plain text?

    No. When submitting individual results files they have to be text files (ASCII). If you choose to submit one single archive file (whether compressed or not) it must contain ASCII results files. The ROMC will reject all other file formats apart from plain text. Please ensure, prior to submission, that your results files adhere to the file naming and formatting conventions. Similarly, the results files (.mes) themselves should not be compressed.

  2. Do I need to adhere to the file naming convention?

    Yes. The file-naming convention is mandatory since it allows the automatic processing of the submitted results. Filenames that do not adhere to the prescribed file naming convention will cause the ROMC processing to be aborted. Thus, prior to sending any results, you are strongly encouraged to perform an on-line format check, using the Check Format link on the 'My Models > My model performances' page, to see whether your model has produced correctly named and formatted results files. Note that because model names are case-sensitive, the model identifier included in the filenames of your simulation results must be exactly the same as the one defined when first registering that model in the ROMC.

  3. What format do my model simulation result files have to be in?

    All submitted simulation files must be plain text (ASCII). Detailed information on the precise formatting (header line, columns and rows) of the results files can be found here. You may also consult the measurement definition pages directly for examples of results files. Of particular importance here are the angular sign conventions. Adherence to these format conventions is mandatory, since otherwise the ROMC will reject the submitted files.
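    Schematically, writing such a plain-ASCII results file might look like the Python sketch below. The header text, the three columns used (view zenith, view azimuth, BRF) and the filename are illustrative assumptions only; the authoritative layout is the one defined on the formatting and measurement definition pages linked above:

      # Illustrative rows: (view zenith, view azimuth, BRF); mind the
      # ROMC angular sign conventions when producing real values.
      rows = [(-75.0, 0.0, 0.0412), (0.0, 0.0, 0.0365), (75.0, 0.0, 0.0498)]

      with open("mymodel_example.mes", "w") as f:   # hypothetical filename
          f.write("# hypothetical header line\n")
          for vza, vaa, brf in rows:
              f.write(f"{vza:8.2f} {vaa:8.2f} {brf:10.6f}\n")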

  4. How many of the prescribed test cases do I need to generate with my model?

    The short answer to this question is: all that you select. However, the ROMC will only complain about missing experiments in VALIDATE mode. In DEBUG mode it is you who chooses the number of experiments, and even if you do not submit all of them, the ROMC will continue. In VALIDATE mode, on the other hand, the number of experiments is prescribed (usually four) and the ROMC will make sure that you submit each one of them. Therefore, when selecting your model evaluation options in VALIDATE mode, make sure you choose a "canopy structure" type that is actually feasible with the model at hand: currently you may choose among 1) only structurally homogeneous environments, which can be represented by 1-D and 3-D RT models, 2) only structurally heterogeneous environments, which can only be represented with 3-D RT models, and 3) a selection of both structurally homogeneous and heterogeneous scenes (floating spheres only in VALIDATE mode), which should be attempted only by 3-D RT models. Also, please make sure that you adhere to the ROMC simulation definitions and the ROMC formatting convention.

  5. How many of the prescribed measurements do I need to simulate with my model?

    Again, the answer to this question is: all of those that your model is capable of simulating. If there are some RAMI experiments that you are unable to perform (in VALIDATE mode, for example), never mind. Whatever measurements you choose to submit in VALIDATE mode must, however, be submitted for all selected experiments. In DEBUG mode you may submit whatever combination of experiments and measurements you wish.

    Please make sure that you utilize the proper ROMC definitions regarding angular sign conventions, leaf normal distributions, and other RT model technicalities prior to starting your simulations. Also read the relevant file naming and formatting conventions that must be adhered to by all participants.




Submission issues with the ROMC:


  1. What if my model cannot exactly simulate the structure of a proposed test case?

    Canopy reflectance models are developed to simulate the reflectance fields of existing types of vegetation architecture. The choice of representations of such vegetation canopies is primarily related to the dimensionality of the RT model, but also to the computational constraints at the time of its conception, as well as the potential field(s) of application. Particularly for the panoply of existing 3-D RT models, scene representation strategies may vary widely. Beyond the actual performance of a particular model it is thus of interest to determine the actual limits and strengths of these different scene implementations within RT models. Hence, for any RT model that is evaluated using the ROMC, the goal should be to represent the prescribed geophysical environments to the best of its abilities. This means that, within the constraints provided by the canopy formulation of that model, the structural properties of a given scene should be described as faithfully as possible with respect to the original description (provided on the corresponding HTML pages). For some sophisticated models this may mean using (in DEBUG mode) the (optional) ASCII files that we provide to reproduce exactly the location and orientation of all individual scatterers in the scene. Other models may do well by generating their scenes in a manner that suits their own internal formalism, provided that structural parameters like canopy height, leaf area index, leaf area density, etc. are respected at least at the level of the scene, if not at the level of every individual object. For best simulation results, also try to adhere as closely as possible to the ROMC simulation definitions.

  2. What if my model cannot generate all of the required measurements?

    ROMC participants are strongly encouraged to simulate all prescribed measurements within the capabilities of their model. In DEBUG mode you may submit any combination of experiments and measurements. In VALIDATE mode, however, the ROMC strictly enforces that whatever measurement type is chosen, it is submitted for all of the presented experiments (typically 4). Thus it is possible, for example, to select only the brfop measurement type and submit four files (1 per experiment), or, alternatively, to select all 11 measurement types and submit 4*11=44 results files to the ROMC (in VALIDATE mode).
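    This file-count bookkeeping is simple enough to check locally; in the Python sketch below the experiment and measurement identifiers are placeholders, and the real filenames must of course follow the ROMC naming convention:

      # Every chosen measurement must be submitted for every experiment.
      experiments  = ["EXP1", "EXP2", "EXP3", "EXP4"]   # typically 4 assigned
      measurements = ["brfpp", "brfop"]                 # your chosen subset

      expected = [(m, e) for e in experiments for m in measurements]
      print(len(expected), "results files required")    # 4 * 2 = 8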

  3. How do I submit my model simulation results?

    The first step to check the performance of your RT model is to register it. This can be done by selecting the 'Register a new Model' link from the My Models menu in the left-hand navigation menu. To register a new model it needs to be given a model name and (optionally) a small description regarding its type and internal functioning, e.g., 'Analytic plane-parallel canopy reflectance model with hot spot formulation and Bunnik's leaf normal distribution function'. You may also provide a published reference to your model if it exists. Note that model names are case-sensitive alphanumeric strings of at most 10 characters that have to be unique within the context of the ROMC. Avoid using characters like \ / : ' * ? " < > | _ - or blanks in your model name. Because model names are case-sensitive, the model identifier included in the filenames of your simulation results must be exactly the same as the one defined when first registering that model in the ROMC. A complete list of currently registered models can be found by clicking on the 'model list' button in the My models menu. Note also that the number of models per user is limited to 3. When you have entered all information into the boxes of the model registration page click the 'submit' button to proceed. This will automatically forward you to the model evaluation page.

    The model evaluation page can also be accessed via the 'My model performances' link on the My Models menu in the left-hand navigation menu. Assuming that you have just registered your first model, this page will show a table with four columns and one header line. The link in the header line is your model name; clicking it will allow you to alter the description of your model. The first (leftmost) column of the table gives the status of the test, i.e., NEW in the case where you just registered this model. The second column indicates the date on which this particular evaluation check of your model was started (since you have not started anything yet it will say < not activated yet >). The third column offers you several choices on how to run the ROMC and on what kinds of vegetation canopies.

    The first choice a user has to make is to select between two different types of "ROMC usage":

    • DEBUG mode is intended for ROMC participants who wish to debug their RT model by running it multiple times over a few specific experiments and/or measurements. To this effect, a subset of the various homogeneous/heterogeneous, discrete/turbid-medium test cases of previous phases of RAMI is made available here. The results of these test cases have already been published on-line (see the RAMI website). As such, DEBUG mode covers both homogeneous and heterogeneous (turbid and discrete) canopy representations. DEBUG mode allows you to repeatedly submit your simulations for identical experiments and measurements (which you can select yourself) until you are satisfied with the performance of your model. Note, however, that DEBUG mode does not really qualify as an evaluation test of your model since all results are known (and can actually be downloaded in ASCII format). If you wish to obtain a qualified means to evaluate your model, use VALIDATE mode instead.
    • VALIDATE mode is intended for ROMC participants who wish to verify the performance of their model against a small set of test cases for which the results are not known a priori. The test cases here are restricted to homogeneous discrete scenarios (HOM), as well as heterogeneous (HET) discrete floating spheres scenarios, with slightly different structural, spectral and/or illumination properties than those prescribed during previous phases of RAMI. If you select VALIDATE mode, the ROMC will automatically (and randomly) assign a series of HOMogeneous and/or HETerogeneous test cases to your model. This selection of test cases cannot be changed until you submit your results files, at which point a new set of test cases will be automatically (and randomly) selected for you. Obviously the reference dataset will not be available for download in VALIDATE mode, but the ROMC results do qualify as a means to show the performance of a user's model in publications.

    The next choice a user has to make is to select between different types of "canopy structures":

    • HOM refers to structurally homogeneous canopy scenarios. These test cases may come with finite-sized (discrete), or, infinitesimally small (turbid) foliage representations.
    • HET refers to structurally heterogeneous canopy scenarios, like the floating spheres test cases. These may come with finite-sized (discrete), or, infinitesimally small (turbid) foliage representations.
    • HETHOM refers to test cases that may be either structurally homogeneous or heterogeneous.

    Once you have selected your model evaluation preferences, click on the View test cases button in the right-most column of the table and you will be presented with a (two-column) table showing the proposed/assigned experiments (left column) and measurements (right column). Each of the measurement and experiment identifiers in this table can be clicked to open a pop-up window with a detailed description of the structural, spectral and illumination setup of the experiment, or of the exact measurement conditions as well as the formatting requirements for the results.

    • In DEBUG mode you can select any combination of experiments and measurements (by ticking their boxes, or, to include all available experiments and measurements, by ticking the appropriate 'select all' box at the top of the respective table column). Then click on the Confirm selected test cases button at the bottom of the table to continue evaluating your model against these selected test cases, or, alternatively, click on the left-ward pointing triangle at the bottom of the table to return to the "My model > My model performance" page. Note that in DEBUG mode you do not need to submit all selected measurements for all selected experiments and vice versa.
    • In VALIDATE mode you will be presented with a small set of experiments that were randomly selected from a larger list. You will also be presented with a few measurement types, typically brf1, brfpp, brfop and fabs. Although you may decide not to perform all measurements, whatever measurement you decide to submit has to be submitted for all experiments in the left-hand column of the table with the proposed test cases. Click on the Confirm button at the bottom of the table if you wish to continue evaluating your model against these test cases, or, alternatively, click on the left-ward pointing triangle at the bottom of the table to return to the "My model > My model performance" page. Note, however, that you will always be presented with exactly the same test cases for a given "canopy structure" type in VALIDATE mode until you have submitted results for them.

    Whether you choose DEBUG or VALIDATE mode, homogeneous or heterogeneous canopy structures (or both): once you have accepted your selected/assigned test scenes, the ROMC returns you to the "My model > My model performance" page, which has changed to indicate that:

    • the first column of the model table now features a light-yellow background and reads 'Current test:ACTIVE' instead of NEW. A small add-note icon will be visible in its right-hand corner. Clicking on this icon allows you to insert a small note (up to 80 characters) to clarify the specificities of this submission with respect to previous or subsequent ones. This could be a model version number, a description of the selected experiments/measurements, or some recent change that you implemented in your model.
    • the second column now features the date on which you accepted your test cases.
    • the third column of the model table now features information on the "ROMC usage" (DEBUG or VALIDATE) and the "canopy structure" (HOM, HET, HETHOM); these cannot be altered anymore.
    • the rightmost column now features three options: VIEW Test Scenes to view the detailed descriptions of the measurements and experiments in this assignment, Check Format to verify the formatting of your model simulations (after having run your model on the test cases but) prior to their submission to the ROMC, and SUBMIT Results to submit your (format-checked) model simulations to the ROMC.

    To have access to another set of test cases for this model you will first have to submit the results of your simulations using the SUBMIT Results link (located in the rightmost column).

    Assuming, for now, that you have implemented all selected/assigned test scenes as required (you can always consult their descriptions using the VIEW Test Scenes link in the right-hand column of the table) and that you ran your model in accordance with the various ROMC simulation definitions to yield results files that are in line with the ROMC file-naming and formatting rules (you can always check whether your output files are ROMC-compliant by using the Check Format link in the rightmost column of the table), you can proceed to submit these results by clicking on the SUBMIT Results link located in the rightmost column of the table on the model performance page. To do so you may either submit one single archive file (accepted formats are: .tar, .zip, .jar with the compressions .gz, .bz2, .Z, .tgz), or else - by clicking on the 'Multiple ASCII Files' box - submit individual (uncompressed) results files one by one (this option is only available for submissions of at most 44 files). In both cases you may use the 'browse' button to locate the files that you wish to transfer. When you press the 'send' button we will collect the files and perform a variety of security and formatting tests. If you did not implement the correct file-naming scheme - or you did not submit all required measurement files - a 'SUBMISSION ERROR' page will appear. You will then have to repeat the submission process, selecting at least one measurement (the same one) per experiment for the submission to work. If the right number of results files with correct file-naming schemes has been submitted, the formatting of the content of these files will also be analyzed. If there are any deviations from the ROMC formatting convention (in particular the view zenith and azimuth nomenclature), this will also give rise to a 'SUBMISSION ERROR' page where the error is explained so that you may fix it.
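
    If you opt for a single archive, it can be convenient to build it programmatically. Below is a minimal sketch using Python's standard tarfile module to produce one of the accepted upload formats (a gzipped tar archive); the directory name is an example only.

    import tarfile
    from pathlib import Path

    # Sketch: pack all .mes results files (which must remain uncompressed
    # ASCII inside the archive) into a gzipped tar archive (.tgz), one of
    # the accepted upload formats.
    def make_submission_archive(results_dir: str, archive_name: str = "archive.tgz") -> Path:
        archive = Path(archive_name)
        with tarfile.open(archive, "w:gz") as tar:
            for mes_file in sorted(Path(results_dir).glob("*.mes")):
                tar.add(mes_file, arcname=mes_file.name)  # flat layout inside the archive
        return archive

    archive = make_submission_archive("results")
    print(f"{archive}: {archive.stat().st_size / 1e6:.2f} MB")  # stay below the ~2 MB limit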

    If your submission is successful, i.e., all filenames and content formatting are correct and the right number of files was transferred, you will be informed via the temporary 'SUBMISSION SUCCESSFUL' page, which will automatically forward you to the initial 'My models > My model performance' page again. Here the SUBMIT Results link in the rightmost column of the model table has now been replaced by a VIEW Results link, and the submission time has been added to the second column of the table. The first column now reads 'Current test (completed)' and a new row (with the NEW label in the first column) has been inserted into the model table, where you may select the next test conditions for evaluating the performance of your model if you wish. Should you decide to click on the VIEW Results link, you will be presented with the Results page containing a table whose blue and orange colored fields provide links to measurement, experiment and statistical description pages. The various links in the white table fields provide access to various graphical presentations of the agreement between your uploaded results and the reference dataset.

    You may either save these graphical results files (which feature the ROMC watermark and a reference number) as jpeg files directly from your browser, or else choose to receive them via email as black&white or colour encapsulated postscript files (.eps) by selecting one or more measurements and statistics (individually, or via the row or ALL boxes). Note, however, that at most 5 emails of about 1.5 MB each will be sent at any one time (so make use of the size information provided in the rightmost column and bottom row when selecting your results). In DEBUG mode you may also receive the reference data in text form (ASCII). Once you have made your choice, click on the 'receive' button to obtain (one or more) emails (sorted per type, i.e., eps, ASCII). Alternatively use the 'Back' button to return to the 'My Models > My Model Performance' page again. You are free to use these results as long as you comply with the ROMC data usage policy.

  4. I submitted my results in a single archive file that is being rejected. What did I do wrong? up

    When submitting your results in a single archive file, this file has to be generated using one of the following tools: zip, tar or jar. If you furthermore wish to compress this archive you may use gzip, bzip2 or compress. Note that it is possible to include directories within the archive file; however, your model submission (.mes) files must be uncompressed ASCII files. The table below provides an overview of accepted combinations of archiving and compression tools (the name of the archive here is archive.*).

    Code to generate | Code to compress archive file
    archive file     | No compression              | gzip                              | bzip2
    -----------------+-----------------------------+-----------------------------------+-----------------------------------
    zip              | zip archive.zip *.mes       | gzip archive.zip                  | bzip2 archive.zip
    tar              | tar c *.mes -f archive.tar  | gzip archive.tar                  | bzip2 archive.tar
                     | (see the note below)        | or: tar c *.mes -z -f archive.tgz | or: tar c *.mes -j -f archive.tbz
    jar              | jar cf archive.jar *.mes    | gzip archive.jar                  | bzip2 archive.jar

    Note that if the size of the archive that you try to upload is larger than about two megabytes, a pop-up window will appear stating that the file contains no data. This may happen, for example, if you try to upload all DEBUG mode results at once, in particular when using the uncompressed tar archive option. In this case we suggest compressing the archive and resending it. Even a gzipped tar archive of all HOM and HET DEBUG mode experiments (and all measurements) should, however, not exceed the 2 MB limit.

    If you believe you have adhered to all these rules but are still unable to upload your data, please send an email to the ROMC coordinators (attaching the archive file if it is not bigger than 2 MB) and explain the problem.

  5. Explain the various options to speed up ROMC submissions? up

    On the ROMC page where you browse your computer to find either a single archive file or a selection of individual simulation files, you are offered the possibility to select one or more options that speed up the ROMC processing. These options result in fewer output graphs (and images) being generated. The following options are available:

    • ONLY COLOR: By default the ROMC generates colour graphics that are displayed to the user. Many graphs are, however, also generated in a black&white version (not displayed to the user) that is sent to the user - if so requested - via email. By ticking the 'ONLY COLOR' option no black&white graphs are generated.
    • ONLY GLOBAL: If more than 2 experiments are submitted, the ROMC generates 'global' plots containing the results from all experiments on a single plot. These are the results that are displayed in the 'All Experiments' column at the bottom of the results table. By ticking the 'ONLY GLOBAL' option the ROMC will not generate graphical results for individual experiments but only for all experiments together in one single plot (per measurement).
    • NO DATA: If set, the ROMC will not generate the results graphs known as BRF data plots, which show the BRF of the user-submitted data together with that of the ROMC reference (and a series of envelopes around these).
    • NO 1TO1: If set, the ROMC will not generate the results graphs known as BRF 1TO1 plots, which show the user-submitted data plotted against the ROMC reference data.
    • NO HIST: If set, the ROMC will not generate the results graphs known as BRF difference histograms, which show a histogram of the differences between the user-submitted data and the ROMC reference data.
    • NO FLUX: If set, the ROMC will not generate the results graphs for the flux measurements (brf1, ftran, fabs) known as flux difference histograms, which show a barchart with the flux differences between the user-submitted data and the ROMC reference data.
    • NO CHI2: If set, the ROMC will not generate the chi-square results graphs.
    • NO TAYLOR: If set, the ROMC will not generate the Taylor diagrams.

    Selecting all speed-up options will result in no output being generated.




Results issues with the ROMC:


  1. Explain the various ROMC results graphs to me?  up

    The ROMC results page - accessible via the VIEW Results link on the 'My Models > My model performances' page - provides access to your model simulation results via the various links in the white fields of the results table. All but the last row in the (white section of that) table provide access to model results that pertain to a single experiment only. The last (white) row of the results table, on the other hand, provides graphs giving a synoptic overview of the model performance over all experiments. The columns of the results table relate to individual measurements (brfpp, brfop, ftran, fabs, etc) as well as the Χ2 statistics (which is described in greater detail here). In addition to the Χ2 statistics there are four different types of graphs available that describe the performance of the model tested:

    • BRF data plots show your data in relation to the reference data across the requested view zenith angle range (in degrees) for the orthogonal or principal plane. In addition the 2.5%, 5% and 7.5% deviation envelopes from the reference data are also indicated (using three different shades of gray).
    • BRF 1 to 1 plots show your BRF data plotted against the reference data. This plot type is generated both for individual experiments and for all submitted experiments together (global). In the latter case the number of experiments is indicated. The plot also contains the 1-to-1 line, the root-mean-square (RMS) error,

      RMS = \sqrt{\frac{1}{N} \sum_{n=1}^{N} (X_{usr,n} - X_{ref,n})^2},

      and the signal-to-noise ratio (SNR).
    • BRF difference histograms are generated on the basis of the absolute differences between the reference BRFs and your model-generated BRFs. This plot type is generated both for individual experiments and for all experiments together (global). In the latter case the number of experiments is indicated. The absolute difference between the reference and the model-generated data is multiplied by a factor of 100 on the x-axis, and the root-mean-square (RMS) error is also indicated on the plot.

    • Flux difference plots are generated for the fabs, ftran and brf1 measurements only. The differences of all three of these fluxes with respect to their corresponding reference data are shown in one single graph. This plot type is generated both for individual experiments and for all experiments together (global). In the latter case the number of experiments is indicated.
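
    As an illustration of the quantities plotted above, the following minimal numpy sketch computes the RMS error shown on the 1-to-1 plots and the (x100) absolute differences binned by the difference histograms; the input arrays are invented examples.

    import numpy as np

    # Sketch of the quantities shown above for one experiment: the RMS error
    # reported on the 1-to-1 plots, and the absolute differences (multiplied
    # by 100, as on the histogram x-axis). Inputs are invented example arrays.
    brf_usr = np.array([0.031, 0.042, 0.055, 0.049])   # user-model BRFs
    brf_ref = np.array([0.030, 0.044, 0.052, 0.050])   # ROMC reference BRFs

    rms = np.sqrt(np.mean((brf_usr - brf_ref) ** 2))
    abs_diff_x100 = 100.0 * np.abs(brf_usr - brf_ref)
    counts, bin_edges = np.histogram(abs_diff_x100, bins=20)
    print(rms, counts.sum())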

    [Figures: examples of individual ROMC plot types in DEBUG mode - BRF data plot, BRF 1-to-1 plot, BRF difference histogram, and flux difference plot - followed by examples of global plot types: BRF 1:1 plot (DEBUG), difference histogram (DEBUG), flux difference plot (VALIDATE), and BRF 1:1 plot (VALIDATE). Click on the images to view enlarged versions.]

  2. I cannot see the graphs of my ROMC results? up

    The various ROMC graphs with your results appear as pop-up windows when you click on the corresponding links in the table of the ROMC results page. It may well be that you have disabled pop-up windows in your browser. In that case a bar usually appears at the top of your browser's main window; follow the instructions there to allow pop-up windows for the ROMC site.

  3. How is the Taylor diagram constructed? up

    The Taylor diagram provides a concise statistical summary of how well patterns match each other in terms of their correlation, their root-mean-square difference, and the ratio of their variances (Karl E. Taylor, 'Summarizing multiple aspects of model performance in a single diagram', Journal of Geophysical Research, Vol. 106, No. D7, pages 7183-7192, April 2001). Despite their advantages, one should bear in mind that Taylor diagrams cannot differentiate between two datasets that differ only by a constant offset. We remedy this somewhat by varying the size of the plotting symbols in the Taylor diagrams, but only if no negative correlations occurred and the absolute difference between the model and reference means exceeds 3 percent of the mean of the reference data. It is therefore recommended to cross-check model performance with the skill value of a given model.

    In essence the Taylor diagram is based on the relationship between three commonly used statistical measures:

    1. the correlation coefficient (R) between the N datapoints of the user's dataset (Xusr) and the ROMC reference dataset (Xref). R is the statistic most often used to quantify 'pattern similarity' and is defined as:

       R = \frac{\frac{1}{N} \sum_{n=1}^{N} (X_{usr,n} - \bar{X}_{usr}) (X_{ref,n} - \bar{X}_{ref})}{\sigma_{usr}\,\sigma_{ref}}

      However, even for a correlation coefficient equal to 1 it is not possible to determine whether two patterns have the same amplitude of variation (as measured, for example, by their variances).

    2. the Root-Mean-Square (RMS) difference between the N datapoints of the user's dataset (Xusr) and the ROMC reference dataset (Xref):

       RMS = \sqrt{\frac{1}{N} \sum_{n=1}^{N} (X_{usr,n} - X_{ref,n})^2}

      The total RMS difference can be separated into the quadratic sum of a contribution due to the 'overall bias' and another due to the 'centered pattern RMS difference':

       RMS^2 = \bar{E}^2 + RMS^{*\,2}

      where the RMS contribution due to the 'overall bias' between the mean values of both datasets is:

       \bar{E} = \bar{X}_{usr} - \bar{X}_{ref}

      and the one of interest here, the 'centered pattern RMS difference', is defined as:

       RMS^{*} = \sqrt{\frac{1}{N} \sum_{n=1}^{N} \left[ (X_{usr,n} - \bar{X}_{usr}) - (X_{ref,n} - \bar{X}_{ref}) \right]^2}

      The centered pattern RMS difference approaches zero as two patterns become more alike, but for any given value of RMS* it is impossible to determine how much of the error is due to a difference in structure and phase and how much is simply due to a difference in amplitude of the variations.

    3. the standard deviation, which provides an indication of the amplitude of variations in a given dataset. To compare different datasets on one single Taylor diagram it is customary to normalise the standard deviation of the user's dataset by the standard deviation of the reference dataset:

       \hat{\sigma} = \sigma_{usr} / \sigma_{ref}

    It can be shown that the centered RMS difference is related to the correlation coefficient and the standard deviations of both the user and reference datasets in the same manner as the law of cosines relates two adjacent sides of a triangle and the angle between them to the third (opposite) side:

     RMS^{*\,2} = \sigma_{usr}^2 + \sigma_{ref}^2 - 2\,\sigma_{usr}\,\sigma_{ref}\,R

    When using normalised standard deviations this reduces to:

     \widehat{RMS}^{*\,2} = 1 + \hat{\sigma}^2 - 2\,\hat{\sigma}\,R

    This allows one to construct a diagram that statistically quantifies the degree of similarity between two fields. In our case, one of the fields is the ROMC reference dataset, and the other the dataset generated by a user's model. The aim is to quantify how closely the user's dataset resembles the ROMC reference dataset. In the figure below several points are plotted on a polar-style graph, with the black lozenge (plotted along the abscissa) representing the reference solution and the circles representing the user's results for different measurement types (colour coding). The radial distances from the origin to the points (dotted circular arcs) are proportional to the normalised standard deviation (that is, the ratio of the standard deviation of the user's data to that of the reference data). As a consequence, the black lozenge (ROMC reference) is at a distance of 1 from the origin. The azimuthal position of a datapoint in the graph gives the correlation coefficient, R, between that dataset and the ROMC reference data (labels of the correlation coefficient are given along the outer circular rim of the graph). One can see that the azimuthal position of the ROMC reference data (black lozenge) corresponds to a perfect correlation (R=1). The dashed lines measure the distance from the reference data and correspond to the RMS difference between two datasets (once any overall bias has been removed). In the figure below one can see that the red dot features a larger variance than the reference solution (its radial distance from the origin is larger). Its correlation with the reference solution (black lozenge) is higher than that of the blue and mauve points, which in turn exhibit a smaller variance than the reference dataset (smaller radial distance to the origin). The RMS difference (dashed circular rings) is very similar for all three points, with the mauve one having the smallest value of RMS*.
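
    The three coordinates that place a dataset on the Taylor diagram follow directly from the definitions above. A minimal numpy sketch could look as follows:

    import numpy as np

    # Sketch: the three statistics that position a dataset on the Taylor
    # diagram, following the definitions above (population, i.e. 1/N, statistics).
    def taylor_statistics(x_usr, x_ref):
        x_usr, x_ref = np.asarray(x_usr, float), np.asarray(x_ref, float)
        s_usr, s_ref = x_usr.std(), x_ref.std()
        r = np.mean((x_usr - x_usr.mean()) * (x_ref - x_ref.mean())) / (s_usr * s_ref)
        sigma_hat = s_usr / s_ref                  # radial coordinate
        rms_star = np.sqrt(np.mean(((x_usr - x_usr.mean()) - (x_ref - x_ref.mean())) ** 2))
        # law-of-cosines identity: RMS*^2 = s_usr^2 + s_ref^2 - 2 s_usr s_ref R
        assert np.isclose(rms_star ** 2, s_usr ** 2 + s_ref ** 2 - 2 * s_usr * s_ref * r)
        return r, sigma_hat, rms_star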


  4. Can you explain the meaning of model SKILL? up

    The SKILL of a model is an integrated indicator describing the overall closeness of the simulations of a model to the ROMC reference dataset. The SKILL of a model is computed for every measurement type (brfpp, brfop, ftran, brf1, etc.) provided that a sufficient number of experiments were performed for this measurement type (at least 3 for flux measurements, 1 for BRF measurements). A skill metric should behave as follows:

    • Assuming identical dataset means, then for any given variance the skill score increases with increasing correlation,
    • Assuming identical dataset means, then for any given correlation the skill score increases as the modelled variance approaches the variance of the ROMC reference dataset, and
    • For any given variance and correlation, the skill score increases as the mean of the modelled dataset approaches that of the ROMC reference dataset.

    The ROMC skill scores range from zero (least skillful) to 100 (most skillful). More specifically:

     SKILL = 100 \cdot S

    where S is a metric that evaluates the impact of 1) pattern similarity (correlation), 2) data variability (variance) and 3) differences between the means of the data. S was defined by adapting equation 5 of Karl E. Taylor, 'Summarizing multiple aspects of model performance in a single diagram', Journal of Geophysical Research, Vol. 106, No. D7, pages 7183-7192, April 2001, to account for datasets that have identical correlation and variance but that are offset by some fixed value: the normalised mean offset x̂ (defined below) is penalised in the same way as the normalised standard deviation σ̂. S then takes the form:

     S = \frac{16\,(1+R)^4}{(\hat{\sigma} + 1/\hat{\sigma})^2\,(\hat{x} + 1/\hat{x})^2\,(1+R_0)^4}

    Here R is the correlation coefficient between the N datapoints of the user's dataset (Xusr) and the ROMC reference dataset (Xref), as defined above:

     R = \frac{\frac{1}{N} \sum_{n=1}^{N} (X_{usr,n} - \bar{X}_{usr}) (X_{ref,n} - \bar{X}_{ref})}{\sigma_{usr}\,\sigma_{ref}}

    σ̂ is the standard deviation of the user's dataset (σusr) normalised by that of the ROMC reference dataset (σref):

     \hat{\sigma} = \sigma_{usr} / \sigma_{ref}

    x̂ is the mean of the user's dataset (X̄usr) normalised by the mean of the ROMC reference dataset (X̄ref):

     \hat{x} = \bar{X}_{usr} / \bar{X}_{ref}

    Finally, R0 is the maximum attainable correlation. Since this is set to 1.0000, it is possible to rewrite S simply as:

     S = \frac{(1+R)^4}{(\hat{\sigma} + 1/\hat{\sigma})^2\,(\hat{x} + 1/\hat{x})^2}

    SKILL values are derived from the ensemble of submitted experiments; the representativeness of the SKILL value is therefore related to the number of submitted experiments. Note also that the skill value is not defined in the case of a 'flat' dataset (σusr = 0 or σref = 0), or of a dataset with a mean value of zero.
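
    For illustration only, a small sketch of such a skill computation, assuming the reconstructed form of S given above (with R0 = 1); the exact ROMC implementation may differ in detail:

    import numpy as np

    # Sketch only: a skill computation assuming the reconstructed form of S
    # above (with R0 = 1). The exact ROMC formula may differ in detail.
    def skill(x_usr, x_ref):
        x_usr, x_ref = np.asarray(x_usr, float), np.asarray(x_ref, float)
        s_usr, s_ref = x_usr.std(), x_ref.std()
        if s_usr == 0.0 or s_ref == 0.0 or x_usr.mean() == 0.0 or x_ref.mean() == 0.0:
            raise ValueError("skill is undefined for flat or zero-mean datasets")
        r = np.mean((x_usr - x_usr.mean()) * (x_ref - x_ref.mean())) / (s_usr * s_ref)
        sigma_hat = s_usr / s_ref
        x_hat = x_usr.mean() / x_ref.mean()
        s = (1 + r) ** 4 / ((sigma_hat + 1 / sigma_hat) ** 2 * (x_hat + 1 / x_hat) ** 2)
        return 100.0 * s   # 100 = perfect match, 0 = least skillful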

  5. How is the Χ2 statistics on the results page computed? up

    For each one of the ROMC measurement types the Χ2 statistic can be computed as the normalized sum of the ratios of the squared differences between the model and the reference data to the square of an uncertainty estimate of the reference data, i.e.,

     \chi^2 = \frac{1}{N_{\theta_v} - 1} \sum_{\theta_v} \frac{\left[ X_{usr}(\theta_v) - X_{ref}(\theta_v) \right]^2}{\left[ f \cdot X_{ref}(\theta_v) \right]^2}

    where N_{\theta_v} is the number of view zenith angles, and f has been set to 0.03 such that models with a Χ2 less than unity lie within 3% of the reference data. In the case of the fabs, ftran and brf1 measurements the averaging is performed with respect to N_{\theta_v} only (and not with respect to N_{\theta_v} - 1 as is the case for all brfop and brfpp measurements).

    Note that for BRF values < 0.0001 the Χ2 values may become very large, even if your model's data fits visually very well. In that case it is recommended to use the root-mean-square (RMS) error statistics displayed in the 1:1 plots.
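
    A minimal numpy sketch of this statistic, following the description above:

    import numpy as np

    # Sketch of the Chi2 statistic described above, with f = 0.03. For brfop
    # and brfpp the normalisation is N_theta_v - 1; for the fluxes (fabs,
    # ftran, brf1) it is N_theta_v.
    def chi_square(x_usr, x_ref, f=0.03, is_flux=False):
        x_usr, x_ref = np.asarray(x_usr, float), np.asarray(x_ref, float)
        n = x_usr.size if is_flux else x_usr.size - 1
        return np.sum((x_usr - x_ref) ** 2 / (f * x_ref) ** 2) / n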

    Below is an example of a graph showing the Χ2 statistics (one value per measurement) averaged over all model-submitted simulations for all experiments together (2 in this case):

    [Figure: example ROMC Χ2 plot]


  6. The title of the y-axis in my postscript files is not or only partially visible? up

    It sometimes happens that the maximum BRF values are smaller than 0.001, in which case the y-axis (and x-axis) labeling is changed to the format x.xx e-y. This may mean that the title of the y-axis is not visible in the displayed eps graphics. To remedy this, open the encapsulated postscript (.eps) file with an editor like wordpad, vi or nedit, for example, and change all occurrences of %%BoundingBox (usually the second and 39th lines in the .eps file) such that the first number following %%BoundingBox becomes negative, e.g., from:

    %%BoundingBox: 0 113 595 708
    
    to something like:
    %%BoundingBox: -25 113 595 708
    
    This should bring the y-axis title within the visible range of the eps graph.
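
    If you have many affected files, the same edit can be scripted. A minimal sketch (the filename and the -25 point shift are examples only):

    # Sketch: script the %%BoundingBox edit described above.
    def fix_bounding_box(eps_path, shift=-25):
        with open(eps_path) as fh:
            lines = fh.readlines()
        for i, line in enumerate(lines):
            if line.startswith("%%BoundingBox:"):
                parts = line.split()            # ['%%BoundingBox:', x0, y0, x1, y1]
                if len(parts) == 5 and parts[1].lstrip("-").isdigit():
                    parts[1] = str(int(parts[1]) + shift)
                    lines[i] = " ".join(parts) + "\n"
        with open(eps_path, "w") as fh:
            fh.writelines(lines)

    fix_bounding_box("plot.eps")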


  7. My ROMC results graphics appear incorrect. What can I do? up

    If you have the feeling that some of your results plots are incorrect (e.g., the hot spot is located at positive zenith angles instead of negative) please verify first that you have adhered to the correct angular sign conventions for both the view azimuth and zenith angles, prior to sending an email to romc(dash)webadmin(at)jrc(dot)ec(dot)europa(dot)eu.

    Although every possible effort is undertaken to ensure the correct plotting of user-submitted model simulation results, it is not possible to guarantee the exclusion of errors. For example, the brfpp_mlt and the various brfop components do not contain any features that would allow a correct re-positioning of the BRF values should the user have implemented non-conforming view angle conventions. Similarly, it is not possible to know a priori whether fabs and ftran values smaller than or equal to unity refer to percentages, or - incorrectly - to fractions. Furthermore, no warranty is given on the correct generation of ROMC results if models are tested with functionalities that lie outside the scope of RAMI/ROMC, e.g., if they generate specular peaks. In case of doubt please contact the RAMI coordinators.

    If your graphics appear to have been cropped, please look here.


  8. Can I obtain my ROMC results in encapsulated postscript format? up

    Yes, you may choose to receive your ROMC results (accessible via the VIEW Results link in the rightmost column of the model table on the 'My models > My model performances' page) as colour (RGB) or black and white (B/W) encapsulated postscript (.eps) files. To do so, use the tick boxes located next to each plot type in the results table (to select a given experiment/measurement combination), or in the rightmost column of the results table (to select entire rows), or in the bottom row of the results table (to select entire columns), or in the bottom right corner of the results table (to select all plots of all measurement/experiment combinations in the results table). You will be sent a series of emails, each with a gzipped tar archive (.tgz) containing encapsulated postscript plots up to a maximum of 1.5 MB. To retrieve your postscript files from these gzipped archives use winzip (or WinRAR) under Windows, or issue the following commands under linux/unix:

    tar -xv -z -f archive.tgz
    This will create a directory called RESULTS in which you will find a series of (again) gzipped encapsulated postscript files. To view these .eps files, change into the RESULTS directory and type:
    gunzip *gz
    Finally, under Windows, use acrobat distiller to convert the eps files into pdf files for viewing, whereas for linux/unix operating systems use ghostview, display or xv, for example, to view the .eps graphics files.

    Note, however, that results obtained in DEBUG mode do not qualify as model validation! This is because the ROMC's DEBUG mode features the same test cases as those that were made publicly available at the end of the third phase of RAMI. Users of the ROMC are therefore advised not to use DEBUG mode results to make public claims regarding the performance of a model. Results obtained in VALIDATE mode, by contrast, do qualify as a means to document the performance of a given RT model and consequently may be used in publications for that purpose. In fact one of the goals of the ROMC is precisely to establish a mechanism allowing users, developers and owners of RT models to make the quality of their RT models known to the larger scientific community. By using results obtained from a series of 3D Monte Carlo models, identified during the RAMI community exercise, to generate a surrogate truth, the ROMC provides an independent and speedy mechanism to evaluate the performance of RT models (in the absence of other absolute reference standards). The ROMC thus encourages the usage of the provided VALIDATE mode results graphs in publications. For how to reference ROMC results, see question 9 below.


  9. How do I reference ROMC graphs/results?  up

    Please note that it is not permissible to modify, change or edit the results provided by the ROMC. This applies to ROMC graphs, statistics, and reference data, which must all be used 'as is'. If you choose to include ROMC results in a publication or presentation, their source should be acknowledged as follows:

    These results were obtained with the RAMI On-line Model Checker (ROMC) available at http://romc.jrc.ec.europa.eu/ (Widlowski et al., 2007).

    where (Widlowski et al., 2007) - click here for a LaTeX-formatted bibliographic reference - refers to:

    Widlowski, J.-L., M. Robustelli, M. Disney, J.-P. Gastellu-Etchegorry, T. Lavergne, P. Lewis, P. J. R. North, B. Pinty, R. Thompson, and M. M. Verstraete (2007), 'The RAMI On-line Model Checker (ROMC): A web-based benchmarking facility for canopy reflectance models', Remote Sensing of Environment, 112(3), 1144-1150.
    DOI: 10.1016/j.rse.2007.07.016

    Prior to publication the registered ROMC user is encouraged to contact the ROMC coordinators (using romc(dash)webadmin(at)jrc(dot)ec(dot)europa(dot)eu) to ensure the correctness of the received ROMC results plots. Although the ROMC procedure aims at eliminating errors and inconsistencies, no guarantees as to the correctness of the automatically displayed results can be given (in particular if models are tested with functionalities that lie outside the scope of RAMI, e.g., if they generate specular peaks). The responsibility for verifying ROMC results prior to any publication thus lies with the registered ROMC user, and neither the ROMC coordinators nor their institution accept any responsibility for consequences arising from the publication of unverified ROMC results.


Miscellaneous issues:


  1. Can I compare my different ROMC submissions against themselves? up

    To compare the results of your different ROMC submissions against each other, click on the 'My models > Model Comparison' link in the left-hand navigation panel. This will bring you to the first of three pages required before the ROMC generates results graphs of the ROMC submissions that you wish to plot against each other:

    • STEP 1: select the 'ROMC usage type' first (DEBUG or VALIDATE), and then the 'canopy structure type' (HOM, HET or HETHOM). Clicking on 'Show available experiments/measurements' will present you with a page that features all the ROMC experiments and measurements belonging to the selected 'ROMC usage' and 'canopy structure' types.
    • STEP 2: select the experiments (left) and measurements (right) that you are interested in, using the 'shift' key and the mouse button to select adjacent entries and the 'Ctrl' key and the mouse button to select separate entries in these lists. When you click on the 'Find my ROMC submissions that include ALL these experiments/measurements' button, the ROMC will check whether any of your ROMC submissions includes all of the measurements and experiments that you selected. If none of your ROMC submissions contains all selected measurements and experiments, the ROMC will tell you and you will have to change your selection.
    • STEP 3: select which of your models and submissions to the ROMC you wish to include in the subsequent ROMC results graphics (to do so, tick the corresponding table column entitled 'one or more ROMC datasets'). By default your ROMC submissions will be compared against the ROMC reference dataset (last column in the table). You can change this by ticking one entry in the second table column, entitled 'one reference ROMC dataset'. Note, however, that a given ROMC submission cannot be reference and model at the same time. Finally, click the 'Generate Intercomparison results' button.

    A results page with various results graphics will be generated that now features all the ROMC submissions you selected (an example is given below). If you chose different ROMC submissions from the same model, a pop-up legend will appear that relates the model name colour to the actual time at which each ROMC submission was performed. All graphs can be received in postscript form by selecting the graphs of interest and clicking on the 'send' button at the bottom of the page.

  2. Can I use the ROMC to generate graphs of the skill of my models? up

    To generate a graph with the skill values of one or more of your ROMC submissions, click on the 'My models > Model Skill' link in the left-hand navigation panel. This will bring you to the first of two pages required before the ROMC generates the skill graphs you want:

    • STEP 1: select the 'ROMC usage type' first (DEBUG or VALIDATE), and then the 'canopy structure type' (HOM, HET or HETHOM). Clicking on the 'Show available experiments/measurements' will present you with a page that features all your ROMC experiments and measurements for which skill values are available and that belong to the selected 'ROMC usage' and 'canopy structure' types.
    • STEP 2: click on the measurements (top row) that you wish to include in the final skill graph. Next, select the models and/or model submissions you wish to include in the resulting skill graph. Note that if a +/- appears in a table cell below one of your model names, this means that more than one ROMC submission of this model was performed. Clicking on the +/- will expand the table to show all ROMC submissions of this model. Once you have selected all models and/or ROMC submissions and measurements you wish to include in the skill graphs, click on the 'Create graph of selected model skill values' button.

    The result is a graph showing model skill (on a logarithmic y-axis) - where skill=100 is a perfect match and skill=0 the worst case - for each of the measurements you selected. If you chose different ROMC submissions from the same model, a legend graph will be shown that relates the model colour to the actual time at which each ROMC submission was performed. The size of the plotting symbols is related to the number of experiments performed for any given ROMC submission. All graphs can be received in postscript form by selecting the graphs of interest and clicking on the 'send' button at the bottom of the page.

  3. Can I generate 'All Experiment' graphs from a subset of one of my older ROMC submissions? up

    If you have submitted a large ensemble of test cases, say DEBUG HOMogeneous canopies in both the NIR and red spectral domains, you will have been presented with a series of graphs - in the 'All Experiments' column of the resulting results table - that feature the contributions from all selected experiments and measurements. Assume that you would now like such a summarising graph but only for the cases in the red spectral band. To do so, click on the 'My models > Model Comparison' link in the left-hand navigation panel. This will bring you to the first of three pages required before the ROMC generates results graphs of the ensemble of (red spectral band) test cases that you wish to plot on one single plot:

    • STEP 1: select the 'ROMC usage type' first (DEBUG or VALIDATE), and then the 'canopy structure type' (HOM, HET or HETHOM). In the example above you would have to select the 'ROMC usage' DEBUG and the 'canopy structure' HOM. Clicking on 'Show available experiments/measurements' will present you with a page that features all the ROMC experiments and measurements belonging to the selected 'ROMC usage' and 'canopy structure' types.
    • STEP 2: select the experiments (left) and measurements (right) that you are interested in, using the 'shift' key and the mouse button to select adjacent entries and the 'Ctrl' key and the mouse button to select separate entries in these lists. In the example above you would have to select all the HOM_*_RED_* experiments that you are interested in (as well as the corresponding measurements). When you then click on the 'Find my ROMC submissions that include ALL these experiments/measurements' button, the ROMC will check whether any of your ROMC submissions includes all of the measurements and experiments that you selected. If none of your ROMC submissions contains all selected measurements and experiments, the ROMC will tell you and you will have to change your selection.
    • STEP 3: select the one of your model submissions to the ROMC for which you wish to obtain new 'All Experiments' graphs (to do so, tick the corresponding table column entitled 'one or more ROMC datasets'). By default your ROMC submissions will be compared against the ROMC reference dataset (last column in the table). Finally, click the 'Generate Intercomparison results' button.

    A results page with various results graphics will be generated that now features ONLY the ROMC submissions you selected. At the bottom of the results page, in the 'All Experiments' row, you will find the graphs that you are interested in. All graphs can be received in postscript form by selecting the graphs of interest and clicking on the 'send' button at the bottom of the page. Note that the above 'My models > Model Comparison' link will also allow you to plot several of your ROMC submissions against each other.

  4. Why do I need to pay attention to the reference plane when performing measurements? up

    All radiation transfer simulations are carried out with respect to a (typically horizontal) reference plane. Only those portions of the incoming and exiting radiation that pass through this reference plane are to be considered in the various ROMC measurements. The default reference plane within the ROMC covers the entire test case area (known as the "scene") and is located at the top-of-canopy height, that is, just above the highest structural element in the scene. The spatial extent of the reference plane can be envisaged as the (idealized) boundaries of the IFOV of a perfect sensor looking at a 'flat' surface located at the height of the reference plane. Changing the height, location or extent of this reference area will obviously affect your simulation results.

  5. What if I discover an error in already submitted results? up

    Once you submit your correctly named and formatted results to the ROMC they will be processed and published. There is nothing you can do to stop or reverse this process. Nothing prevents you, however, from performing another test with your model. In DEBUG mode you will be able to select exactly the same experiments and measurements, whereas in VALIDATE mode you will - unfortunately - be given slightly different experiments and/or measurements. If you feel that you can prove that you made a mistake, e.g., misnamed the collided as uncollided etc., please send an email to the ROMC coordinator, who will investigate and if possible update your results pages.