Evaluating Method Engineer Performance: an error classification and preliminary empirical study
Keywords: empirical, evaluation, model, method, Batra, pilot study
Abstract
We describe an approach to empirically testing the use of metaCASE environments to model methods. Both diagrams and matrices have been proposed as means of presenting methods, and these different paradigms may each affect how easily and how well users can model them. We extend Batra's classification of errors in data modelling to cover metamodelling, and use it to measure the performance of a group of metamodellers working with either diagrams or matrices. The tentative results of this pilot study confirm the usefulness of the classification and reveal some interesting differences between the paradigms.
How to Cite
Kelly, S., & Rossi, M. (1998). Evaluating Method Engineer Performance: an error classification and preliminary empirical study. Australasian Journal of Information Systems, 6(1). https://doi.org/10.3127/ajis.v6i1.316
Copyright (c) 1998 Steven Kelly, Matti Rossi
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
AJIS publishes open-access articles distributed under the terms of a Creative Commons Attribution-NonCommercial License, which permits non-commercial use, distribution, and reproduction in any medium, provided the original author and AJIS are credited. All other rights, including the granting of permissions beyond those in the above license, remain the property of the author(s).